Jan 14 00:29:26.602621 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 14 00:29:26.602649 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Jan 13 22:00:26 -00 2026 Jan 14 00:29:26.602661 kernel: KASLR enabled Jan 14 00:29:26.602667 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Jan 14 00:29:26.602673 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218 Jan 14 00:29:26.602679 kernel: random: crng init done Jan 14 00:29:26.602686 kernel: secureboot: Secure boot disabled Jan 14 00:29:26.602692 kernel: ACPI: Early table checksum verification disabled Jan 14 00:29:26.602699 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Jan 14 00:29:26.602707 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Jan 14 00:29:26.602714 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 00:29:26.602720 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 00:29:26.602726 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 00:29:26.602732 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 00:29:26.602742 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 00:29:26.602749 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 00:29:26.602756 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 00:29:26.602762 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 00:29:26.602770 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 00:29:26.602776 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Jan 14 00:29:26.602783 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Jan 14 00:29:26.602790 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 14 00:29:26.602797 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Jan 14 00:29:26.602806 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff] Jan 14 00:29:26.603059 kernel: Zone ranges: Jan 14 00:29:26.603066 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 14 00:29:26.603073 kernel: DMA32 empty Jan 14 00:29:26.603080 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Jan 14 00:29:26.603086 kernel: Device empty Jan 14 00:29:26.603093 kernel: Movable zone start for each node Jan 14 00:29:26.603099 kernel: Early memory node ranges Jan 14 00:29:26.603106 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff] Jan 14 00:29:26.603113 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff] Jan 14 00:29:26.603119 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff] Jan 14 00:29:26.603126 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Jan 14 00:29:26.603138 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Jan 14 00:29:26.603144 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Jan 14 00:29:26.603151 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Jan 14 00:29:26.603157 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Jan 14 00:29:26.603164 kernel: node 0: [mem 
0x0000000139fe0000-0x0000000139ffffff] Jan 14 00:29:26.603173 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Jan 14 00:29:26.603182 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Jan 14 00:29:26.603189 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1 Jan 14 00:29:26.603196 kernel: psci: probing for conduit method from ACPI. Jan 14 00:29:26.603203 kernel: psci: PSCIv1.1 detected in firmware. Jan 14 00:29:26.603210 kernel: psci: Using standard PSCI v0.2 function IDs Jan 14 00:29:26.603257 kernel: psci: Trusted OS migration not required Jan 14 00:29:26.603265 kernel: psci: SMC Calling Convention v1.1 Jan 14 00:29:26.603273 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 14 00:29:26.603283 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 14 00:29:26.603290 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 14 00:29:26.603298 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 14 00:29:26.603305 kernel: Detected PIPT I-cache on CPU0 Jan 14 00:29:26.603312 kernel: CPU features: detected: GIC system register CPU interface Jan 14 00:29:26.603318 kernel: CPU features: detected: Spectre-v4 Jan 14 00:29:26.603325 kernel: CPU features: detected: Spectre-BHB Jan 14 00:29:26.603332 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 14 00:29:26.603340 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 14 00:29:26.603347 kernel: CPU features: detected: ARM erratum 1418040 Jan 14 00:29:26.603354 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 14 00:29:26.603363 kernel: alternatives: applying boot alternatives Jan 14 00:29:26.603371 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=3d3f73de8d2693594dfefd279d2c8d77c282a05a4cbc54177503d31784261f6b Jan 14 00:29:26.603378 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 14 00:29:26.603386 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 14 00:29:26.603392 kernel: Fallback order for Node 0: 0 Jan 14 00:29:26.603400 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000 Jan 14 00:29:26.603406 kernel: Policy zone: Normal Jan 14 00:29:26.603413 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 14 00:29:26.603420 kernel: software IO TLB: area num 2. Jan 14 00:29:26.603427 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB) Jan 14 00:29:26.603436 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 14 00:29:26.603443 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 14 00:29:26.603451 kernel: rcu: RCU event tracing is enabled. Jan 14 00:29:26.603459 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 14 00:29:26.603466 kernel: Trampoline variant of Tasks RCU enabled. Jan 14 00:29:26.603472 kernel: Tracing variant of Tasks RCU enabled. Jan 14 00:29:26.603479 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 14 00:29:26.603486 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 14 00:29:26.603494 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 14 00:29:26.603501 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 00:29:26.603507 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 14 00:29:26.603516 kernel: GICv3: 256 SPIs implemented Jan 14 00:29:26.603523 kernel: GICv3: 0 Extended SPIs implemented Jan 14 00:29:26.603530 kernel: Root IRQ handler: gic_handle_irq Jan 14 00:29:26.603537 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 14 00:29:26.603544 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jan 14 00:29:26.603551 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 14 00:29:26.603558 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 14 00:29:26.603565 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1) Jan 14 00:29:26.603575 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1) Jan 14 00:29:26.603583 kernel: GICv3: using LPI property table @0x0000000100120000 Jan 14 00:29:26.603591 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000 Jan 14 00:29:26.603599 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 14 00:29:26.603606 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 14 00:29:26.603613 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 14 00:29:26.603620 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 14 00:29:26.603627 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 14 00:29:26.603634 kernel: Console: colour dummy device 80x25 Jan 14 00:29:26.603642 kernel: ACPI: Core revision 20240827 Jan 14 00:29:26.603650 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 14 00:29:26.603658 kernel: pid_max: default: 32768 minimum: 301 Jan 14 00:29:26.605870 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 14 00:29:26.605898 kernel: landlock: Up and running. Jan 14 00:29:26.605906 kernel: SELinux: Initializing. Jan 14 00:29:26.605914 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 14 00:29:26.605922 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 14 00:29:26.605930 kernel: rcu: Hierarchical SRCU implementation. Jan 14 00:29:26.605939 kernel: rcu: Max phase no-delay instances is 400. Jan 14 00:29:26.605949 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 14 00:29:26.605965 kernel: Remapping and enabling EFI services. Jan 14 00:29:26.605973 kernel: smp: Bringing up secondary CPUs ... Jan 14 00:29:26.605981 kernel: Detected PIPT I-cache on CPU1 Jan 14 00:29:26.605988 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 14 00:29:26.605996 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000 Jan 14 00:29:26.606003 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 14 00:29:26.606011 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 14 00:29:26.606021 kernel: smp: Brought up 1 node, 2 CPUs Jan 14 00:29:26.606028 kernel: SMP: Total of 2 processors activated. 
Jan 14 00:29:26.606041 kernel: CPU: All CPU(s) started at EL1 Jan 14 00:29:26.606051 kernel: CPU features: detected: 32-bit EL0 Support Jan 14 00:29:26.606059 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 14 00:29:26.606066 kernel: CPU features: detected: Common not Private translations Jan 14 00:29:26.606074 kernel: CPU features: detected: CRC32 instructions Jan 14 00:29:26.606082 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 14 00:29:26.606092 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 14 00:29:26.606099 kernel: CPU features: detected: LSE atomic instructions Jan 14 00:29:26.606107 kernel: CPU features: detected: Privileged Access Never Jan 14 00:29:26.606115 kernel: CPU features: detected: RAS Extension Support Jan 14 00:29:26.606123 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 14 00:29:26.606132 kernel: alternatives: applying system-wide alternatives Jan 14 00:29:26.606140 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Jan 14 00:29:26.606149 kernel: Memory: 3885924K/4096000K available (11200K kernel code, 2458K rwdata, 9088K rodata, 12480K init, 1038K bss, 188596K reserved, 16384K cma-reserved) Jan 14 00:29:26.606157 kernel: devtmpfs: initialized Jan 14 00:29:26.606165 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 14 00:29:26.606173 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 14 00:29:26.606181 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 14 00:29:26.606191 kernel: 0 pages in range for non-PLT usage Jan 14 00:29:26.606199 kernel: 515168 pages in range for PLT usage Jan 14 00:29:26.606207 kernel: pinctrl core: initialized pinctrl subsystem Jan 14 00:29:26.606224 kernel: SMBIOS 3.0.0 present. Jan 14 00:29:26.606232 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Jan 14 00:29:26.606240 kernel: DMI: Memory slots populated: 1/1 Jan 14 00:29:26.606248 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 14 00:29:26.606256 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 14 00:29:26.606266 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 14 00:29:26.606274 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 14 00:29:26.606282 kernel: audit: initializing netlink subsys (disabled) Jan 14 00:29:26.606289 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 14 00:29:26.606297 kernel: cpuidle: using governor menu Jan 14 00:29:26.606305 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 14 00:29:26.606314 kernel: audit: type=2000 audit(0.022:1): state=initialized audit_enabled=0 res=1 Jan 14 00:29:26.606324 kernel: ASID allocator initialised with 32768 entries Jan 14 00:29:26.606332 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 14 00:29:26.606340 kernel: Serial: AMBA PL011 UART driver Jan 14 00:29:26.606348 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 14 00:29:26.606355 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 14 00:29:26.606363 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 14 00:29:26.606371 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 14 00:29:26.606381 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 14 00:29:26.606389 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 14 00:29:26.606396 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 14 00:29:26.606405 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 14 00:29:26.606413 kernel: ACPI: Added _OSI(Module Device) Jan 14 00:29:26.606420 kernel: ACPI: Added _OSI(Processor Device) Jan 14 00:29:26.606428 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 14 00:29:26.606437 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 14 00:29:26.606447 kernel: ACPI: Interpreter enabled Jan 14 00:29:26.606455 kernel: ACPI: Using GIC for interrupt routing Jan 14 00:29:26.606463 kernel: ACPI: MCFG table detected, 1 entries Jan 14 00:29:26.606471 kernel: ACPI: CPU0 has been hot-added Jan 14 00:29:26.606479 kernel: ACPI: CPU1 has been hot-added Jan 14 00:29:26.606487 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 14 00:29:26.606495 kernel: printk: legacy console [ttyAMA0] enabled Jan 14 00:29:26.606505 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 14 00:29:26.606735 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 14 00:29:26.608483 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 14 00:29:26.608622 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 14 00:29:26.608707 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 14 00:29:26.608802 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 14 00:29:26.608829 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 14 00:29:26.608838 kernel: PCI host bridge to bus 0000:00 Jan 14 00:29:26.608944 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 14 00:29:26.609130 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 14 00:29:26.609232 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 14 00:29:26.609326 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 14 00:29:26.609439 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Jan 14 00:29:26.609538 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint Jan 14 00:29:26.609633 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff] Jan 14 00:29:26.609717 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref] Jan 14 00:29:26.611903 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 00:29:26.612074 kernel: pci 0000:00:02.0: BAR 0 [mem 
0x11288000-0x11288fff] Jan 14 00:29:26.612165 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 14 00:29:26.612296 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff] Jan 14 00:29:26.612388 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref] Jan 14 00:29:26.612486 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 00:29:26.612677 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff] Jan 14 00:29:26.612782 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 14 00:29:26.615010 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff] Jan 14 00:29:26.615157 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 00:29:26.615306 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff] Jan 14 00:29:26.615401 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 14 00:29:26.615496 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff] Jan 14 00:29:26.615580 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref] Jan 14 00:29:26.615672 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 00:29:26.615761 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff] Jan 14 00:29:26.615877 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 14 00:29:26.615963 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff] Jan 14 00:29:26.616050 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref] Jan 14 00:29:26.616148 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 00:29:26.616267 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff] Jan 14 00:29:26.616356 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 14 00:29:26.616437 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Jan 14 00:29:26.616519 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref] Jan 14 00:29:26.616612 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 00:29:26.616695 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff] Jan 14 00:29:26.616777 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 14 00:29:26.618955 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff] Jan 14 00:29:26.619062 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref] Jan 14 00:29:26.619178 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 00:29:26.619372 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff] Jan 14 00:29:26.619460 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 14 00:29:26.619542 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff] Jan 14 00:29:26.619625 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref] Jan 14 00:29:26.619724 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 00:29:26.619838 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff] Jan 14 00:29:26.619944 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 14 00:29:26.620039 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff] Jan 14 00:29:26.620141 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 00:29:26.620241 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff] Jan 14 00:29:26.620331 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 14 00:29:26.620420 kernel: pci 0000:00:03.0: bridge window 
[mem 0x10000000-0x101fffff] Jan 14 00:29:26.620514 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 conventional PCI endpoint Jan 14 00:29:26.620602 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007] Jan 14 00:29:26.620707 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Jan 14 00:29:26.620795 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff] Jan 14 00:29:26.623045 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Jan 14 00:29:26.623171 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref] Jan 14 00:29:26.623294 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Jan 14 00:29:26.623389 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit] Jan 14 00:29:26.623494 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint Jan 14 00:29:26.623579 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff] Jan 14 00:29:26.623680 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref] Jan 14 00:29:26.623785 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Jan 14 00:29:26.623915 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref] Jan 14 00:29:26.624016 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 14 00:29:26.624099 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff] Jan 14 00:29:26.624188 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref] Jan 14 00:29:26.624338 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint Jan 14 00:29:26.624433 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff] Jan 14 00:29:26.624520 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref] Jan 14 00:29:26.624618 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Jan 14 00:29:26.624704 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff] Jan 14 00:29:26.624798 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref] Jan 14 00:29:26.627304 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref] Jan 14 00:29:26.627443 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jan 14 00:29:26.627537 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Jan 14 00:29:26.627627 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Jan 14 00:29:26.627722 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jan 14 00:29:26.627851 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jan 14 00:29:26.627944 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Jan 14 00:29:26.628039 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 14 00:29:26.628126 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Jan 14 00:29:26.628239 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jan 14 00:29:26.628345 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 14 00:29:26.628433 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 
04] add_size 100000 add_align 100000 Jan 14 00:29:26.628520 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jan 14 00:29:26.628612 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 14 00:29:26.628701 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Jan 14 00:29:26.628784 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Jan 14 00:29:26.628925 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 14 00:29:26.629018 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Jan 14 00:29:26.629105 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Jan 14 00:29:26.629198 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 14 00:29:26.629301 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Jan 14 00:29:26.629394 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Jan 14 00:29:26.629493 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 14 00:29:26.629581 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Jan 14 00:29:26.629667 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Jan 14 00:29:26.629761 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 14 00:29:26.631116 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Jan 14 00:29:26.631434 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Jan 14 00:29:26.631676 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned Jan 14 00:29:26.631995 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned Jan 14 00:29:26.632237 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned Jan 14 00:29:26.632483 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned Jan 14 00:29:26.632699 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned Jan 14 00:29:26.634804 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned Jan 14 00:29:26.634968 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned Jan 14 00:29:26.635061 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned Jan 14 00:29:26.635155 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned Jan 14 00:29:26.635271 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned Jan 14 00:29:26.635367 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned Jan 14 00:29:26.635461 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned Jan 14 00:29:26.635568 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned Jan 14 00:29:26.635655 
kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned Jan 14 00:29:26.635759 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned Jan 14 00:29:26.635909 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned Jan 14 00:29:26.636027 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned Jan 14 00:29:26.636127 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned Jan 14 00:29:26.636257 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned Jan 14 00:29:26.636393 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned Jan 14 00:29:26.638229 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned Jan 14 00:29:26.638462 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned Jan 14 00:29:26.638570 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned Jan 14 00:29:26.638673 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Jan 14 00:29:26.638766 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned Jan 14 00:29:26.638934 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Jan 14 00:29:26.639036 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned Jan 14 00:29:26.639124 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Jan 14 00:29:26.639239 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned Jan 14 00:29:26.639333 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Jan 14 00:29:26.639528 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned Jan 14 00:29:26.639682 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Jan 14 00:29:26.641570 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned Jan 14 00:29:26.641678 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Jan 14 00:29:26.641783 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned Jan 14 00:29:26.641894 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Jan 14 00:29:26.641996 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned Jan 14 00:29:26.642085 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned Jan 14 00:29:26.644031 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned Jan 14 00:29:26.644153 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned Jan 14 00:29:26.644288 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Jan 14 00:29:26.644391 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned Jan 14 00:29:26.644480 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 14 00:29:26.644562 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 14 00:29:26.644643 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Jan 14 00:29:26.644725 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Jan 14 00:29:26.645983 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned Jan 14 00:29:26.646135 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 14 00:29:26.646270 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 14 00:29:26.646371 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Jan 14 00:29:26.646459 kernel: pci 0000:00:02.1: bridge window [mem 
0x8000200000-0x80003fffff 64bit pref] Jan 14 00:29:26.646563 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned Jan 14 00:29:26.646657 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned Jan 14 00:29:26.646741 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 14 00:29:26.646953 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 14 00:29:26.647044 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Jan 14 00:29:26.647125 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Jan 14 00:29:26.647222 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned Jan 14 00:29:26.647318 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 14 00:29:26.647399 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 14 00:29:26.647479 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Jan 14 00:29:26.647568 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Jan 14 00:29:26.647661 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned Jan 14 00:29:26.647745 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned Jan 14 00:29:26.648938 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 14 00:29:26.649083 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 14 00:29:26.649177 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Jan 14 00:29:26.649318 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Jan 14 00:29:26.649421 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned Jan 14 00:29:26.649516 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned Jan 14 00:29:26.649612 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 14 00:29:26.649708 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 14 00:29:26.649791 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Jan 14 00:29:26.649894 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 14 00:29:26.649996 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned Jan 14 00:29:26.650101 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned Jan 14 00:29:26.650199 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned Jan 14 00:29:26.650318 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 14 00:29:26.650411 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 14 00:29:26.650515 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Jan 14 00:29:26.650597 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 14 00:29:26.650680 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 14 00:29:26.650761 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 14 00:29:26.652905 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Jan 14 00:29:26.653048 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 14 00:29:26.653148 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 14 00:29:26.653252 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Jan 14 00:29:26.653343 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Jan 14 00:29:26.653425 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jan 14 00:29:26.653509 kernel: pci_bus 0000:00: 
resource 4 [mem 0x10000000-0x3efeffff window] Jan 14 00:29:26.653588 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 14 00:29:26.653665 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 14 00:29:26.653756 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 14 00:29:26.653942 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jan 14 00:29:26.654025 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jan 14 00:29:26.654112 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Jan 14 00:29:26.654194 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jan 14 00:29:26.654412 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jan 14 00:29:26.654519 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Jan 14 00:29:26.654601 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jan 14 00:29:26.654690 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jan 14 00:29:26.654804 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 14 00:29:26.654954 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jan 14 00:29:26.655051 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jan 14 00:29:26.655152 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Jan 14 00:29:26.655248 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jan 14 00:29:26.655334 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jan 14 00:29:26.655431 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Jan 14 00:29:26.655511 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jan 14 00:29:26.655587 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 14 00:29:26.655674 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Jan 14 00:29:26.655760 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jan 14 00:29:26.655859 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 14 00:29:26.655951 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Jan 14 00:29:26.656052 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jan 14 00:29:26.656129 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 14 00:29:26.656223 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Jan 14 00:29:26.656303 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jan 14 00:29:26.656386 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Jan 14 00:29:26.656397 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 14 00:29:26.656406 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 14 00:29:26.656414 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 14 00:29:26.656423 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 14 00:29:26.656431 kernel: iommu: Default domain type: Translated Jan 14 00:29:26.656440 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 14 00:29:26.656450 kernel: efivars: Registered efivars operations Jan 14 00:29:26.656457 kernel: vgaarb: loaded Jan 14 00:29:26.656465 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 14 00:29:26.656474 kernel: VFS: Disk quotas dquot_6.6.0 Jan 14 00:29:26.656482 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 14 00:29:26.656490 kernel: pnp: PnP ACPI init Jan 14 00:29:26.656597 
kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 14 00:29:26.656611 kernel: pnp: PnP ACPI: found 1 devices Jan 14 00:29:26.656619 kernel: NET: Registered PF_INET protocol family Jan 14 00:29:26.656627 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 14 00:29:26.656636 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 14 00:29:26.656645 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 14 00:29:26.656653 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 14 00:29:26.656661 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 14 00:29:26.656671 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 14 00:29:26.656679 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 14 00:29:26.656687 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 14 00:29:26.656695 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 14 00:29:26.656790 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 14 00:29:26.656802 kernel: PCI: CLS 0 bytes, default 64 Jan 14 00:29:26.656822 kernel: kvm [1]: HYP mode not available Jan 14 00:29:26.656833 kernel: Initialise system trusted keyrings Jan 14 00:29:26.656841 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 14 00:29:26.656849 kernel: Key type asymmetric registered Jan 14 00:29:26.656857 kernel: Asymmetric key parser 'x509' registered Jan 14 00:29:26.656865 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 14 00:29:26.656873 kernel: io scheduler mq-deadline registered Jan 14 00:29:26.656882 kernel: io scheduler kyber registered Jan 14 00:29:26.656891 kernel: io scheduler bfq registered Jan 14 00:29:26.656900 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 14 00:29:26.656990 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jan 14 00:29:26.657075 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jan 14 00:29:26.657167 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:29:26.657299 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jan 14 00:29:26.657390 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Jan 14 00:29:26.657482 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:29:26.657730 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 14 00:29:26.657872 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 14 00:29:26.657974 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:29:26.658061 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 14 00:29:26.658143 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 14 00:29:26.658248 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:29:26.658337 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 14 00:29:26.658423 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 14 00:29:26.658505 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ 
Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:29:26.658591 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 14 00:29:26.658673 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 14 00:29:26.658760 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:29:26.658914 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 14 00:29:26.659009 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 14 00:29:26.659092 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:29:26.659254 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 14 00:29:26.659355 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 14 00:29:26.659444 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:29:26.659456 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 14 00:29:26.659541 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 14 00:29:26.659626 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 14 00:29:26.659711 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:29:26.659722 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 14 00:29:26.659731 kernel: ACPI: button: Power Button [PWRB] Jan 14 00:29:26.659743 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 14 00:29:26.659867 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 14 00:29:26.659970 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 14 00:29:26.659984 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 00:29:26.659993 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 14 00:29:26.660078 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 14 00:29:26.660090 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 14 00:29:26.660101 kernel: thunder_xcv, ver 1.0 Jan 14 00:29:26.660110 kernel: thunder_bgx, ver 1.0 Jan 14 00:29:26.660118 kernel: nicpf, ver 1.0 Jan 14 00:29:26.660126 kernel: nicvf, ver 1.0 Jan 14 00:29:26.660256 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 14 00:29:26.660342 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-14T00:29:25 UTC (1768350565) Jan 14 00:29:26.660358 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 14 00:29:26.660367 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 14 00:29:26.660375 kernel: watchdog: NMI not fully supported Jan 14 00:29:26.660384 kernel: watchdog: Hard watchdog permanently disabled Jan 14 00:29:26.660392 kernel: NET: Registered PF_INET6 protocol family Jan 14 00:29:26.660401 kernel: Segment Routing with IPv6 Jan 14 00:29:26.660409 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 00:29:26.660421 kernel: NET: Registered PF_PACKET protocol family Jan 14 00:29:26.660429 kernel: Key type dns_resolver registered Jan 14 00:29:26.660437 kernel: registered taskstats version 1 Jan 14 00:29:26.660446 kernel: Loading compiled-in X.509 certificates Jan 14 00:29:26.660454 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: d16d100cda59d8093883df975a5384fda36b7d35' Jan 14 00:29:26.660463 
kernel: Demotion targets for Node 0: null Jan 14 00:29:26.660471 kernel: Key type .fscrypt registered Jan 14 00:29:26.660479 kernel: Key type fscrypt-provisioning registered Jan 14 00:29:26.660488 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 14 00:29:26.660496 kernel: ima: Allocated hash algorithm: sha1 Jan 14 00:29:26.660504 kernel: ima: No architecture policies found Jan 14 00:29:26.660513 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 14 00:29:26.660521 kernel: clk: Disabling unused clocks Jan 14 00:29:26.660529 kernel: PM: genpd: Disabling unused power domains Jan 14 00:29:26.660537 kernel: Freeing unused kernel memory: 12480K Jan 14 00:29:26.660547 kernel: Run /init as init process Jan 14 00:29:26.660555 kernel: with arguments: Jan 14 00:29:26.660563 kernel: /init Jan 14 00:29:26.660571 kernel: with environment: Jan 14 00:29:26.660579 kernel: HOME=/ Jan 14 00:29:26.660587 kernel: TERM=linux Jan 14 00:29:26.660596 kernel: ACPI: bus type USB registered Jan 14 00:29:26.660605 kernel: usbcore: registered new interface driver usbfs Jan 14 00:29:26.660614 kernel: usbcore: registered new interface driver hub Jan 14 00:29:26.660622 kernel: usbcore: registered new device driver usb Jan 14 00:29:26.660716 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 14 00:29:26.660801 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 14 00:29:26.661283 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 14 00:29:26.661388 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 14 00:29:26.661485 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 14 00:29:26.661571 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 14 00:29:26.661691 kernel: hub 1-0:1.0: USB hub found Jan 14 00:29:26.661786 kernel: hub 1-0:1.0: 4 ports detected Jan 14 00:29:26.661917 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 14 00:29:26.662568 kernel: hub 2-0:1.0: USB hub found Jan 14 00:29:26.662691 kernel: hub 2-0:1.0: 4 ports detected Jan 14 00:29:26.662703 kernel: SCSI subsystem initialized Jan 14 00:29:26.662876 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Jan 14 00:29:26.662989 kernel: scsi host0: Virtio SCSI HBA Jan 14 00:29:26.663145 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 14 00:29:26.663265 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 14 00:29:26.663360 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 14 00:29:26.663449 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 14 00:29:26.663541 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 14 00:29:26.663636 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 14 00:29:26.663727 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 14 00:29:26.663738 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 14 00:29:26.663847 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 14 00:29:26.663942 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 14 00:29:26.664032 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 14 00:29:26.664042 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 14 00:29:26.664053 kernel: GPT:25804799 != 80003071 Jan 14 00:29:26.664061 kernel: GPT:Alternate GPT header not at the end of the disk. 
Jan 14 00:29:26.664070 kernel: GPT:25804799 != 80003071 Jan 14 00:29:26.664078 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 14 00:29:26.664086 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 00:29:26.664174 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 14 00:29:26.664184 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 14 00:29:26.664195 kernel: device-mapper: uevent: version 1.0.3 Jan 14 00:29:26.664204 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 14 00:29:26.664259 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 14 00:29:26.664271 kernel: raid6: neonx8 gen() 15432 MB/s Jan 14 00:29:26.664280 kernel: raid6: neonx4 gen() 15543 MB/s Jan 14 00:29:26.664288 kernel: raid6: neonx2 gen() 12785 MB/s Jan 14 00:29:26.664296 kernel: raid6: neonx1 gen() 9755 MB/s Jan 14 00:29:26.664307 kernel: raid6: int64x8 gen() 6703 MB/s Jan 14 00:29:26.664442 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 14 00:29:26.664457 kernel: raid6: int64x4 gen() 6792 MB/s Jan 14 00:29:26.664466 kernel: raid6: int64x2 gen() 5992 MB/s Jan 14 00:29:26.664474 kernel: raid6: int64x1 gen() 4961 MB/s Jan 14 00:29:26.664483 kernel: raid6: using algorithm neonx4 gen() 15543 MB/s Jan 14 00:29:26.664492 kernel: raid6: .... xor() 12195 MB/s, rmw enabled Jan 14 00:29:26.664502 kernel: raid6: using neon recovery algorithm Jan 14 00:29:26.664511 kernel: xor: measuring software checksum speed Jan 14 00:29:26.664520 kernel: 8regs : 21556 MB/sec Jan 14 00:29:26.664529 kernel: 32regs : 18316 MB/sec Jan 14 00:29:26.664538 kernel: arm64_neon : 28109 MB/sec Jan 14 00:29:26.664546 kernel: xor: using function: arm64_neon (28109 MB/sec) Jan 14 00:29:26.664554 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 00:29:26.664565 kernel: BTRFS: device fsid 68b1ce8e-a637-4e91-acf8-5a2e05e289e5 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (212) Jan 14 00:29:26.664574 kernel: BTRFS info (device dm-0): first mount of filesystem 68b1ce8e-a637-4e91-acf8-5a2e05e289e5 Jan 14 00:29:26.664583 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 14 00:29:26.664687 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 14 00:29:26.664702 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 00:29:26.664710 kernel: BTRFS info (device dm-0): enabling free space tree Jan 14 00:29:26.664719 kernel: loop: module loaded Jan 14 00:29:26.664733 kernel: loop0: detected capacity change from 0 to 91832 Jan 14 00:29:26.664742 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 14 00:29:26.664751 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 14 00:29:26.669103 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 14 00:29:26.669143 kernel: usbcore: registered new interface driver usbhid Jan 14 00:29:26.669152 kernel: usbhid: USB HID core driver Jan 14 00:29:26.669314 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 14 00:29:26.669331 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 14 00:29:26.669341 systemd[1]: Successfully made /usr/ read-only. 
Jan 14 00:29:26.669353 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 00:29:26.669362 systemd[1]: Detected virtualization kvm. Jan 14 00:29:26.669376 systemd[1]: Detected architecture arm64. Jan 14 00:29:26.669385 systemd[1]: Running in initrd. Jan 14 00:29:26.669394 systemd[1]: No hostname configured, using default hostname. Jan 14 00:29:26.669403 systemd[1]: Hostname set to <localhost>. Jan 14 00:29:26.669412 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 00:29:26.669545 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 14 00:29:26.669562 systemd[1]: Queued start job for default target initrd.target. Jan 14 00:29:26.669573 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 00:29:26.669582 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 00:29:26.669592 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 00:29:26.669603 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 00:29:26.669613 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 00:29:26.669624 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 00:29:26.669633 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 00:29:26.669643 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 00:29:26.669651 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 00:29:26.669661 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 14 00:29:26.669669 systemd[1]: Reached target paths.target - Path Units. Jan 14 00:29:26.669680 systemd[1]: Reached target slices.target - Slice Units. Jan 14 00:29:26.669690 systemd[1]: Reached target swap.target - Swaps. Jan 14 00:29:26.669698 systemd[1]: Reached target timers.target - Timer Units. Jan 14 00:29:26.669707 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 00:29:26.669716 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 00:29:26.669726 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 00:29:26.669734 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 00:29:26.669745 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 14 00:29:26.669754 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 00:29:26.669763 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 00:29:26.669772 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 00:29:26.669780 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 00:29:26.669790 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Jan 14 00:29:26.669799 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 00:29:26.669859 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 00:29:26.669870 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 00:29:26.669879 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 14 00:29:26.669888 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 00:29:26.669897 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 00:29:26.669905 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 00:29:26.669918 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 00:29:26.669927 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 00:29:26.669937 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 00:29:26.669946 systemd[1]: Finished systemd-fsck-usr.service. Jan 14 00:29:26.669957 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 00:29:26.670001 systemd-journald[352]: Collecting audit messages is enabled. Jan 14 00:29:26.670024 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 00:29:26.670036 kernel: audit: type=1130 audit(1768350566.604:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:26.670045 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 00:29:26.670055 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 00:29:26.670064 kernel: Bridge firewalling registered Jan 14 00:29:26.670073 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 00:29:26.670083 kernel: audit: type=1130 audit(1768350566.635:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:26.670091 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 00:29:26.670102 kernel: audit: type=1130 audit(1768350566.638:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:26.670111 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 00:29:26.670121 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 00:29:26.670130 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 00:29:26.670139 kernel: audit: type=1130 audit(1768350566.662:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:29:26.670150 systemd-journald[352]: Journal started Jan 14 00:29:26.670173 systemd-journald[352]: Runtime Journal (/run/log/journal/06071e2836284cfb9a4b857d86907f83) is 8M, max 76.5M, 68.5M free. Jan 14 00:29:26.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:26.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:26.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:26.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:26.632314 systemd-modules-load[353]: Inserted module 'br_netfilter' Jan 14 00:29:26.672340 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 00:29:26.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:26.677850 kernel: audit: type=1130 audit(1768350566.672:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:26.683093 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 00:29:26.686516 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 00:29:26.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:26.691878 kernel: audit: type=1130 audit(1768350566.687:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:26.691946 kernel: audit: type=1334 audit(1768350566.690:8): prog-id=6 op=LOAD Jan 14 00:29:26.690000 audit: BPF prog-id=6 op=LOAD Jan 14 00:29:26.693493 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 00:29:26.700697 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 00:29:26.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:26.709887 kernel: audit: type=1130 audit(1768350566.704:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:26.709901 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 00:29:26.715970 systemd-tmpfiles[381]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Jan 14 00:29:26.726891 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 00:29:26.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:26.732847 kernel: audit: type=1130 audit(1768350566.727:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:26.744791 dracut-cmdline[390]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=3d3f73de8d2693594dfefd279d2c8d77c282a05a4cbc54177503d31784261f6b Jan 14 00:29:26.771614 systemd-resolved[385]: Positive Trust Anchors: Jan 14 00:29:26.771639 systemd-resolved[385]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 00:29:26.771642 systemd-resolved[385]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 00:29:26.771674 systemd-resolved[385]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 00:29:26.808082 systemd-resolved[385]: Defaulting to hostname 'linux'. Jan 14 00:29:26.810132 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 00:29:26.811642 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 00:29:26.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:26.887850 kernel: Loading iSCSI transport class v2.0-870. Jan 14 00:29:26.898101 kernel: iscsi: registered transport (tcp) Jan 14 00:29:26.913847 kernel: iscsi: registered transport (qla4xxx) Jan 14 00:29:26.913919 kernel: QLogic iSCSI HBA Driver Jan 14 00:29:26.949863 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 00:29:26.983063 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 00:29:26.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:26.989945 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 00:29:27.057917 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 14 00:29:27.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:27.060489 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 00:29:27.062066 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 00:29:27.118684 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 00:29:27.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:27.119000 audit: BPF prog-id=7 op=LOAD Jan 14 00:29:27.119000 audit: BPF prog-id=8 op=LOAD Jan 14 00:29:27.123041 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 00:29:27.161485 systemd-udevd[626]: Using default interface naming scheme 'v257'. Jan 14 00:29:27.172124 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 00:29:27.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:27.176179 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 14 00:29:27.217303 dracut-pre-trigger[679]: rd.md=0: removing MD RAID activation Jan 14 00:29:27.237785 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 00:29:27.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:27.239000 audit: BPF prog-id=9 op=LOAD Jan 14 00:29:27.242967 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 00:29:27.270780 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 00:29:27.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:27.277466 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 00:29:27.299658 systemd-networkd[756]: lo: Link UP Jan 14 00:29:27.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:27.299668 systemd-networkd[756]: lo: Gained carrier Jan 14 00:29:27.300463 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 00:29:27.301421 systemd[1]: Reached target network.target - Network. Jan 14 00:29:27.363940 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 00:29:27.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:27.367727 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 00:29:27.576682 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
Jan 14 00:29:27.594191 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 14 00:29:27.620963 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 14 00:29:27.622851 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 00:29:27.634877 systemd-networkd[756]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:29:27.634887 systemd-networkd[756]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 00:29:27.637772 systemd-networkd[756]: eth1: Link UP Jan 14 00:29:27.637991 systemd-networkd[756]: eth1: Gained carrier Jan 14 00:29:27.638008 systemd-networkd[756]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:29:27.646092 systemd-networkd[756]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:29:27.646097 systemd-networkd[756]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 00:29:27.648385 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 14 00:29:27.649084 systemd-networkd[756]: eth0: Link UP Jan 14 00:29:27.649260 systemd-networkd[756]: eth0: Gained carrier Jan 14 00:29:27.649276 systemd-networkd[756]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:29:27.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:27.663404 disk-uuid[821]: Primary Header is updated. Jan 14 00:29:27.663404 disk-uuid[821]: Secondary Entries is updated. Jan 14 00:29:27.663404 disk-uuid[821]: Secondary Header is updated. Jan 14 00:29:27.655619 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 00:29:27.655720 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 00:29:27.661330 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 00:29:27.668071 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 00:29:27.684120 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 00:29:27.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:27.688189 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 00:29:27.688995 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 00:29:27.689632 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 00:29:27.698072 systemd-networkd[756]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 14 00:29:27.700094 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 00:29:27.711274 systemd-networkd[756]: eth0: DHCPv4 address 91.99.0.249/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 14 00:29:27.740102 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 14 00:29:27.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:27.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:27.765980 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 00:29:28.726556 disk-uuid[822]: Warning: The kernel is still using the old partition table. Jan 14 00:29:28.726556 disk-uuid[822]: The new table will be used at the next reboot or after you Jan 14 00:29:28.726556 disk-uuid[822]: run partprobe(8) or kpartx(8) Jan 14 00:29:28.726556 disk-uuid[822]: The operation has completed successfully. Jan 14 00:29:28.745926 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 00:29:28.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:28.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:28.746157 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 00:29:28.749168 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 14 00:29:28.813860 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (852) Jan 14 00:29:28.816290 kernel: BTRFS info (device sda6): first mount of filesystem 2541ae65-ff33-437e-8e42-0fd50870835d Jan 14 00:29:28.816351 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 14 00:29:28.827849 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 14 00:29:28.827936 kernel: BTRFS info (device sda6): turning on async discard Jan 14 00:29:28.827952 kernel: BTRFS info (device sda6): enabling free space tree Jan 14 00:29:28.837938 kernel: BTRFS info (device sda6): last unmount of filesystem 2541ae65-ff33-437e-8e42-0fd50870835d Jan 14 00:29:28.839741 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 00:29:28.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:28.841750 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 14 00:29:28.999413 ignition[871]: Ignition 2.24.0 Jan 14 00:29:28.999431 ignition[871]: Stage: fetch-offline Jan 14 00:29:28.999485 ignition[871]: no configs at "/usr/lib/ignition/base.d" Jan 14 00:29:28.999496 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 14 00:29:28.999683 ignition[871]: parsed url from cmdline: "" Jan 14 00:29:28.999687 ignition[871]: no config URL provided Jan 14 00:29:29.000317 ignition[871]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 00:29:29.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:29.003170 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 14 00:29:29.000338 ignition[871]: no config at "/usr/lib/ignition/user.ign" Jan 14 00:29:29.005444 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 14 00:29:29.000345 ignition[871]: failed to fetch config: resource requires networking Jan 14 00:29:29.000896 ignition[871]: Ignition finished successfully Jan 14 00:29:29.045327 ignition[880]: Ignition 2.24.0 Jan 14 00:29:29.045344 ignition[880]: Stage: fetch Jan 14 00:29:29.045521 ignition[880]: no configs at "/usr/lib/ignition/base.d" Jan 14 00:29:29.045529 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 14 00:29:29.045621 ignition[880]: parsed url from cmdline: "" Jan 14 00:29:29.045624 ignition[880]: no config URL provided Jan 14 00:29:29.045632 ignition[880]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 00:29:29.045638 ignition[880]: no config at "/usr/lib/ignition/user.ign" Jan 14 00:29:29.049065 systemd-networkd[756]: eth0: Gained IPv6LL Jan 14 00:29:29.045676 ignition[880]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 14 00:29:29.049954 ignition[880]: GET result: OK Jan 14 00:29:29.050074 ignition[880]: parsing config with SHA512: e0922174e62694f9106839ff029c489696583fbce0a0603455b1840258ebd126926f651cc0103de5a06abdb60a7fb6dcb818d64470e5b5c4be72674eab19cba3 Jan 14 00:29:29.058906 unknown[880]: fetched base config from "system" Jan 14 00:29:29.059350 ignition[880]: fetch: fetch complete Jan 14 00:29:29.058916 unknown[880]: fetched base config from "system" Jan 14 00:29:29.059356 ignition[880]: fetch: fetch passed Jan 14 00:29:29.058921 unknown[880]: fetched user config from "hetzner" Jan 14 00:29:29.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:29.059417 ignition[880]: Ignition finished successfully Jan 14 00:29:29.061839 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 00:29:29.069101 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 14 00:29:29.099094 ignition[886]: Ignition 2.24.0 Jan 14 00:29:29.099113 ignition[886]: Stage: kargs Jan 14 00:29:29.099318 ignition[886]: no configs at "/usr/lib/ignition/base.d" Jan 14 00:29:29.099327 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 14 00:29:29.100239 ignition[886]: kargs: kargs passed Jan 14 00:29:29.100304 ignition[886]: Ignition finished successfully Jan 14 00:29:29.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:29.104998 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 14 00:29:29.108198 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 14 00:29:29.151700 ignition[892]: Ignition 2.24.0 Jan 14 00:29:29.151722 ignition[892]: Stage: disks Jan 14 00:29:29.151943 ignition[892]: no configs at "/usr/lib/ignition/base.d" Jan 14 00:29:29.151953 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 14 00:29:29.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:29.154428 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
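[Editor's aside, not part of the captured log] The fetch stage above shows Ignition pulling user data from the Hetzner metadata service at 169.254.169.254 and recording the SHA512 of the retrieved config before the kargs and disks stages run. The following is only an illustrative sketch of the equivalent request and digest; the endpoint and the SHA512 step come from the log lines above, while the timeout value and error handling are assumptions.

```python
# Illustrative sketch: fetch the instance userdata the way the log shows
# Ignition doing it, and print the SHA512 digest that Ignition records
# ("parsing config with SHA512: ...").  The URL is taken from the log;
# the timeout is an assumption.
import hashlib
import urllib.request

USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"

def fetch_userdata(url: str = USERDATA_URL, timeout: float = 5.0) -> bytes:
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read()

if __name__ == "__main__":
    config = fetch_userdata()
    digest = hashlib.sha512(config).hexdigest()
    print(f"fetched {len(config)} bytes, SHA512: {digest}")
```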
Jan 14 00:29:29.152873 ignition[892]: disks: disks passed Jan 14 00:29:29.155707 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 00:29:29.152931 ignition[892]: Ignition finished successfully Jan 14 00:29:29.157347 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 00:29:29.158983 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 00:29:29.159822 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 00:29:29.160768 systemd[1]: Reached target basic.target - Basic System. Jan 14 00:29:29.163503 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 00:29:29.212163 systemd-fsck[900]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 14 00:29:29.217929 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 00:29:29.222853 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 14 00:29:29.222925 kernel: audit: type=1130 audit(1768350569.219:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:29.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:29.221456 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 14 00:29:29.315851 kernel: EXT4-fs (sda9): mounted filesystem db887ae3-d64c-46de-9f1e-de51a801ae44 r/w with ordered data mode. Quota mode: none. Jan 14 00:29:29.317007 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 00:29:29.318635 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 00:29:29.322510 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 00:29:29.325806 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 00:29:29.335037 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 14 00:29:29.339646 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 00:29:29.342247 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 00:29:29.351685 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 14 00:29:29.357294 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 14 00:29:29.361061 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (908) Jan 14 00:29:29.363921 kernel: BTRFS info (device sda6): first mount of filesystem 2541ae65-ff33-437e-8e42-0fd50870835d Jan 14 00:29:29.364056 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 14 00:29:29.370024 systemd-networkd[756]: eth1: Gained IPv6LL Jan 14 00:29:29.386575 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 14 00:29:29.386648 kernel: BTRFS info (device sda6): turning on async discard Jan 14 00:29:29.386661 kernel: BTRFS info (device sda6): enabling free space tree Jan 14 00:29:29.389789 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 14 00:29:29.432505 coreos-metadata[910]: Jan 14 00:29:29.432 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 14 00:29:29.437056 coreos-metadata[910]: Jan 14 00:29:29.435 INFO Fetch successful Jan 14 00:29:29.437056 coreos-metadata[910]: Jan 14 00:29:29.435 INFO wrote hostname ci-4547-0-0-n-a43761813d to /sysroot/etc/hostname Jan 14 00:29:29.441017 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 00:29:29.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:29.446971 kernel: audit: type=1130 audit(1768350569.443:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:29.599118 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 00:29:29.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:29.601410 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 00:29:29.605213 kernel: audit: type=1130 audit(1768350569.599:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:29.607618 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 00:29:29.628844 kernel: BTRFS info (device sda6): last unmount of filesystem 2541ae65-ff33-437e-8e42-0fd50870835d Jan 14 00:29:29.664159 ignition[1008]: INFO : Ignition 2.24.0 Jan 14 00:29:29.665056 ignition[1008]: INFO : Stage: mount Jan 14 00:29:29.666996 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 00:29:29.666996 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 14 00:29:29.668869 ignition[1008]: INFO : mount: mount passed Jan 14 00:29:29.668869 ignition[1008]: INFO : Ignition finished successfully Jan 14 00:29:29.671947 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 14 00:29:29.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:29.673773 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 00:29:29.679119 kernel: audit: type=1130 audit(1768350569.672:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:29.679152 kernel: audit: type=1130 audit(1768350569.675:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:29.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:29.678798 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 00:29:29.796447 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
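[Editor's aside, not part of the captured log] The flatcar-metadata-hostname unit above fetches the hostname from the Hetzner metadata endpoint and writes it into the mounted sysroot before root filesystem setup finishes. A minimal sketch of the same idea follows; the URL and target path are taken from the log, while the atomic temp-file replace is an implementation assumption, not the agent's actual code.

```python
# Sketch of what coreos-metadata logs above: fetch the hostname from the
# metadata service and write it to /sysroot/etc/hostname.  URL and path
# come from the log; the atomic rename is an assumed detail.
import os
import tempfile
import urllib.request

HOSTNAME_URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"
TARGET = "/sysroot/etc/hostname"

def write_hostname(url: str = HOSTNAME_URL, target: str = TARGET) -> str:
    with urllib.request.urlopen(url, timeout=5.0) as resp:
        hostname = resp.read().decode().strip()
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(target))
    with os.fdopen(fd, "w") as f:
        f.write(hostname + "\n")
    os.replace(tmp, target)  # rename into place so readers never see a partial file
    return hostname

if __name__ == "__main__":
    print("wrote hostname:", write_hostname())
```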
Jan 14 00:29:29.801025 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 00:29:29.832865 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1020) Jan 14 00:29:29.835336 kernel: BTRFS info (device sda6): first mount of filesystem 2541ae65-ff33-437e-8e42-0fd50870835d Jan 14 00:29:29.835463 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 14 00:29:29.845238 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 14 00:29:29.845330 kernel: BTRFS info (device sda6): turning on async discard Jan 14 00:29:29.845348 kernel: BTRFS info (device sda6): enabling free space tree Jan 14 00:29:29.848916 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 00:29:29.894795 ignition[1037]: INFO : Ignition 2.24.0 Jan 14 00:29:29.894795 ignition[1037]: INFO : Stage: files Jan 14 00:29:29.896315 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 00:29:29.896315 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 14 00:29:29.896315 ignition[1037]: DEBUG : files: compiled without relabeling support, skipping Jan 14 00:29:29.899058 ignition[1037]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 00:29:29.899058 ignition[1037]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 00:29:29.905576 ignition[1037]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 00:29:29.906786 ignition[1037]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 00:29:29.907752 unknown[1037]: wrote ssh authorized keys file for user: core Jan 14 00:29:29.909249 ignition[1037]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 00:29:29.915795 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 14 00:29:29.917094 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 14 00:29:30.013225 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 00:29:30.091707 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 14 00:29:30.091707 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 14 00:29:30.091707 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 00:29:30.091707 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 00:29:30.091707 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 00:29:30.091707 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 00:29:30.091707 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 00:29:30.091707 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 00:29:30.091707 ignition[1037]: 
INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 00:29:30.104568 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 00:29:30.104568 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 00:29:30.104568 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 14 00:29:30.109082 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 14 00:29:30.110819 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 14 00:29:30.110819 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jan 14 00:29:30.443348 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 14 00:29:31.059104 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 14 00:29:31.062269 ignition[1037]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 14 00:29:31.065778 ignition[1037]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 00:29:31.072052 ignition[1037]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 00:29:31.072052 ignition[1037]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 14 00:29:31.072052 ignition[1037]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 14 00:29:31.072052 ignition[1037]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 14 00:29:31.072052 ignition[1037]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 14 00:29:31.072052 ignition[1037]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 14 00:29:31.072052 ignition[1037]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 14 00:29:31.072052 ignition[1037]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 00:29:31.072052 ignition[1037]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 00:29:31.072052 ignition[1037]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 00:29:31.072052 ignition[1037]: INFO : files: files passed Jan 14 00:29:31.072052 ignition[1037]: INFO : Ignition finished successfully Jan 14 00:29:31.102211 kernel: audit: type=1130 audit(1768350571.078:39): pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.076999 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 00:29:31.080893 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 00:29:31.107553 kernel: audit: type=1130 audit(1768350571.102:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.107586 kernel: audit: type=1131 audit(1768350571.102:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.092229 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 00:29:31.101255 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 00:29:31.102857 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 00:29:31.121621 initrd-setup-root-after-ignition[1068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 00:29:31.121621 initrd-setup-root-after-ignition[1068]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 00:29:31.125553 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 00:29:31.128236 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 00:29:31.131943 kernel: audit: type=1130 audit(1768350571.129:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.129616 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 00:29:31.133953 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 00:29:31.208920 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 00:29:31.209131 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 00:29:31.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.220026 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
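[Editor's aside, not part of the captured log] The files stage above records the operations driven by the merged Ignition config: writing the Helm tarball and several YAML files, adding SSH keys for the core user, linking /etc/extensions/kubernetes.raw to the downloaded sysext image, and enabling prepare-helm.service. The actual user config is not captured in the log; the snippet below only sketches, as JSON emitted from Python, the rough shape of an Ignition v3-style config that would produce operations like these. The spec version, file contents, key material, and unit body are all hypothetical.

```python
# Hypothetical Ignition-style config sketch mirroring the logged operations
# (helm tarball, kubernetes sysext link, SSH key for "core", enabling
# prepare-helm.service).  Field values other than the URLs and paths that
# appear in the log are placeholders, not the real config.
import json

config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (hypothetical key)"]}
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.17.3-linux-arm64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz"},
            }
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            {
                "name": "prepare-helm.service",
                "enabled": True,
                "contents": "[Unit]\nDescription=hypothetical unit body\n",
            }
        ]
    },
}

print(json.dumps(config, indent=2))
```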
Jan 14 00:29:31.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.221845 kernel: audit: type=1130 audit(1768350571.214:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.222255 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 00:29:31.223430 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 00:29:31.224836 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 00:29:31.261897 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 00:29:31.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.266231 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 00:29:31.293295 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 00:29:31.294573 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 00:29:31.296860 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 00:29:31.298526 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 00:29:31.300055 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 00:29:31.300442 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 00:29:31.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.303099 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 00:29:31.304608 systemd[1]: Stopped target basic.target - Basic System. Jan 14 00:29:31.306232 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 00:29:31.307403 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 00:29:31.308755 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 00:29:31.310033 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 14 00:29:31.311284 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 00:29:31.312382 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 00:29:31.313564 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 00:29:31.314724 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 00:29:31.315770 systemd[1]: Stopped target swap.target - Swaps. Jan 14 00:29:31.316667 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 00:29:31.317047 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 00:29:31.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.318838 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 14 00:29:31.320323 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 00:29:31.321233 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 00:29:31.321671 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 00:29:31.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.322669 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 00:29:31.322903 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 00:29:31.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.324562 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 00:29:31.324788 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 00:29:31.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.325997 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 00:29:31.326238 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 00:29:31.327389 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 14 00:29:31.327583 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 00:29:31.331217 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 00:29:31.335634 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 00:29:31.343670 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 00:29:31.343991 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 00:29:31.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.352690 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 00:29:31.352873 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 00:29:31.356656 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 00:29:31.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.357765 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 00:29:31.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.365087 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 14 00:29:31.366034 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 00:29:31.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.414182 ignition[1092]: INFO : Ignition 2.24.0 Jan 14 00:29:31.415606 ignition[1092]: INFO : Stage: umount Jan 14 00:29:31.417685 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 00:29:31.417685 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 14 00:29:31.428091 ignition[1092]: INFO : umount: umount passed Jan 14 00:29:31.428091 ignition[1092]: INFO : Ignition finished successfully Jan 14 00:29:31.418487 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 00:29:31.431503 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 00:29:31.435105 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 00:29:31.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.439324 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 00:29:31.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.439458 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 00:29:31.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.441116 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 00:29:31.441329 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 00:29:31.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.442905 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 00:29:31.443035 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 00:29:31.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.445476 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 00:29:31.445555 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 00:29:31.446741 systemd[1]: Stopped target network.target - Network. Jan 14 00:29:31.447705 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 00:29:31.447772 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 14 00:29:31.449075 systemd[1]: Stopped target paths.target - Path Units. Jan 14 00:29:31.450349 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 00:29:31.454426 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 00:29:31.456710 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 00:29:31.458724 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 00:29:31.460064 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 00:29:31.460213 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 00:29:31.461263 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 00:29:31.461346 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 00:29:31.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.462457 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 14 00:29:31.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.462506 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 14 00:29:31.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.463768 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 00:29:31.463870 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 00:29:31.464965 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 00:29:31.465028 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 00:29:31.466636 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 00:29:31.466716 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 00:29:31.468111 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 00:29:31.469393 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 00:29:31.481037 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 00:29:31.481244 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 00:29:31.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.486000 audit: BPF prog-id=6 op=UNLOAD Jan 14 00:29:31.490323 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 00:29:31.490478 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 00:29:31.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.497000 audit: BPF prog-id=9 op=UNLOAD Jan 14 00:29:31.498729 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 14 00:29:31.501261 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 00:29:31.501356 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Jan 14 00:29:31.506561 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 00:29:31.507495 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 00:29:31.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.507599 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 00:29:31.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.511345 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 00:29:31.511431 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 00:29:31.516031 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 00:29:31.516125 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 00:29:31.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.520665 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 00:29:31.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.536702 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 00:29:31.536907 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 00:29:31.540601 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 00:29:31.540713 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 00:29:31.544912 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 00:29:31.544985 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 00:29:31.548127 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 00:29:31.548283 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 00:29:31.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.551374 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 00:29:31.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.551479 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 00:29:31.553502 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 00:29:31.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.553591 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 00:29:31.557124 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Jan 14 00:29:31.558003 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 14 00:29:31.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.558093 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 00:29:31.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.559754 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 00:29:31.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.559851 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 00:29:31.562098 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 14 00:29:31.562193 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 00:29:31.563300 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 00:29:31.563369 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 00:29:31.564453 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 00:29:31.564519 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 00:29:31.593313 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 00:29:31.593685 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 00:29:31.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.602502 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 00:29:31.602915 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 00:29:31.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:31.605942 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 00:29:31.609901 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 00:29:31.649892 systemd[1]: Switching root. 
Jan 14 00:29:31.704943 systemd-journald[352]: Journal stopped Jan 14 00:29:33.112047 systemd-journald[352]: Received SIGTERM from PID 1 (systemd). Jan 14 00:29:33.112152 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 00:29:33.112169 kernel: SELinux: policy capability open_perms=1 Jan 14 00:29:33.112182 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 00:29:33.112197 kernel: SELinux: policy capability always_check_network=0 Jan 14 00:29:33.112212 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 00:29:33.112223 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 00:29:33.112233 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 00:29:33.112244 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 00:29:33.112254 kernel: SELinux: policy capability userspace_initial_context=0 Jan 14 00:29:33.112265 systemd[1]: Successfully loaded SELinux policy in 66.899ms. Jan 14 00:29:33.112290 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.328ms. Jan 14 00:29:33.112306 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 00:29:33.112318 systemd[1]: Detected virtualization kvm. Jan 14 00:29:33.112329 systemd[1]: Detected architecture arm64. Jan 14 00:29:33.112340 systemd[1]: Detected first boot. Jan 14 00:29:33.112351 systemd[1]: Hostname set to . Jan 14 00:29:33.112364 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 00:29:33.112376 zram_generator::config[1135]: No configuration found. Jan 14 00:29:33.112392 kernel: NET: Registered PF_VSOCK protocol family Jan 14 00:29:33.112405 systemd[1]: Populated /etc with preset unit settings. Jan 14 00:29:33.112417 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 14 00:29:33.112428 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 14 00:29:33.112439 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 14 00:29:33.112453 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 00:29:33.112464 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 14 00:29:33.112475 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 14 00:29:33.112486 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 00:29:33.112498 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 14 00:29:33.112509 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 00:29:33.112521 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 00:29:33.112536 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 00:29:33.112547 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 00:29:33.112559 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 00:29:33.112571 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 14 00:29:33.112583 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
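Editorial aside (not part of the log): the entries above record the hand-off from the initramfs to the real root, with the SELinux policy loading and the machine ID being initialized from the SMBIOS/DMI UUID. A minimal Python sketch for checking the same two facts on a running systemd host, assuming the standard locations /etc/machine-id and /sys/fs/selinux/enforce:

from pathlib import Path

def machine_id() -> str:
    # systemd persists the ID it initialized (here, from the SMBIOS/DMI UUID).
    return Path("/etc/machine-id").read_text().strip()

def selinux_enforcing() -> bool:
    # "1" = enforcing, "0" = permissive; the file is absent if SELinux is not active.
    node = Path("/sys/fs/selinux/enforce")
    return node.exists() and node.read_text().strip() == "1"

if __name__ == "__main__":
    print("machine-id:", machine_id())
    print("selinux enforcing:", selinux_enforcing())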
Jan 14 00:29:33.112594 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 14 00:29:33.112605 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 00:29:33.112618 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 14 00:29:33.112630 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 00:29:33.112642 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 00:29:33.112656 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 14 00:29:33.112667 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 14 00:29:33.112680 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 14 00:29:33.112692 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 14 00:29:33.112703 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 00:29:33.112714 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 00:29:33.112725 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 14 00:29:33.112736 systemd[1]: Reached target slices.target - Slice Units. Jan 14 00:29:33.112750 systemd[1]: Reached target swap.target - Swaps. Jan 14 00:29:33.112762 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 00:29:33.112773 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 00:29:33.112784 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 14 00:29:33.112795 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 00:29:33.112807 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 14 00:29:33.116004 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 00:29:33.116018 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 14 00:29:33.116037 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 14 00:29:33.116050 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 00:29:33.116063 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 00:29:33.116075 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 14 00:29:33.116092 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 00:29:33.116103 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 00:29:33.116114 systemd[1]: Mounting media.mount - External Media Directory... Jan 14 00:29:33.116184 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 14 00:29:33.116202 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 00:29:33.116214 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 14 00:29:33.116227 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 14 00:29:33.116239 systemd[1]: Reached target machines.target - Containers. Jan 14 00:29:33.116251 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
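Editorial aside (not part of the log): the line above mounts the usual API filesystems (huge pages, POSIX message queues, kernel debug and tracing, /tmp). An illustrative sketch that confirms those filesystem types are present by scanning /proc/self/mounts:

# Check for the special filesystems systemd mounted above.
WANTED = {"hugetlbfs", "mqueue", "debugfs", "tracefs", "tmpfs"}

with open("/proc/self/mounts") as mounts:
    seen = {line.split()[2] for line in mounts}   # third field is the fstype

for fstype in sorted(WANTED):
    print(f"{fstype:10s} {'mounted' if fstype in seen else 'missing'}")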
Jan 14 00:29:33.116263 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 00:29:33.116282 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 00:29:33.116296 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 14 00:29:33.116308 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 00:29:33.116319 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 00:29:33.116332 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 00:29:33.116346 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 14 00:29:33.116357 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 00:29:33.116370 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 00:29:33.116381 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 14 00:29:33.116392 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 14 00:29:33.116404 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 14 00:29:33.116415 systemd[1]: Stopped systemd-fsck-usr.service. Jan 14 00:29:33.116426 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 00:29:33.116438 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 00:29:33.116449 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 00:29:33.116462 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 00:29:33.116473 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 00:29:33.116486 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 14 00:29:33.116497 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 00:29:33.116508 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 14 00:29:33.116519 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 14 00:29:33.116530 systemd[1]: Mounted media.mount - External Media Directory. Jan 14 00:29:33.116541 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 14 00:29:33.116554 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 14 00:29:33.116567 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 14 00:29:33.116578 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 00:29:33.116590 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 14 00:29:33.116600 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 00:29:33.116613 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 00:29:33.116624 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 00:29:33.116635 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 00:29:33.116646 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 14 00:29:33.116657 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 00:29:33.116668 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 00:29:33.116679 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 00:29:33.116695 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 14 00:29:33.116708 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 14 00:29:33.116720 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 14 00:29:33.116731 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 14 00:29:33.116744 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 00:29:33.116755 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 14 00:29:33.116768 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 00:29:33.116784 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 00:29:33.116797 kernel: fuse: init (API version 7.41) Jan 14 00:29:33.125853 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 14 00:29:33.125914 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 00:29:33.125928 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 14 00:29:33.125947 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 00:29:33.125959 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 00:29:33.125973 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 00:29:33.125984 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 00:29:33.125996 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 00:29:33.126009 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 14 00:29:33.126057 systemd-journald[1200]: Collecting audit messages is enabled. Jan 14 00:29:33.126084 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 00:29:33.126098 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 14 00:29:33.126112 systemd-journald[1200]: Journal started Jan 14 00:29:33.126186 systemd-journald[1200]: Runtime Journal (/run/log/journal/06071e2836284cfb9a4b857d86907f83) is 8M, max 76.5M, 68.5M free. Jan 14 00:29:32.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:32.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:29:32.934000 audit: BPF prog-id=14 op=UNLOAD Jan 14 00:29:32.934000 audit: BPF prog-id=13 op=UNLOAD Jan 14 00:29:32.935000 audit: BPF prog-id=15 op=LOAD Jan 14 00:29:32.935000 audit: BPF prog-id=16 op=LOAD Jan 14 00:29:32.935000 audit: BPF prog-id=17 op=LOAD Jan 14 00:29:32.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:29:33.099000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 14 00:29:33.099000 audit[1200]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffd69cf790 a2=4000 a3=0 items=0 ppid=1 pid=1200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:33.099000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 14 00:29:33.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:32.711710 systemd[1]: Queued start job for default target multi-user.target. Jan 14 00:29:33.137587 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 00:29:33.137631 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 00:29:33.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:32.736433 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 14 00:29:32.737187 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 14 00:29:33.149598 kernel: ACPI: bus type drm_connector registered Jan 14 00:29:33.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.141902 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 14 00:29:33.148151 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 00:29:33.148408 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 00:29:33.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.170627 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 14 00:29:33.175234 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 00:29:33.183972 kernel: loop1: detected capacity change from 0 to 211168 Jan 14 00:29:33.186108 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 14 00:29:33.194372 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 14 00:29:33.206044 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 14 00:29:33.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.215395 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. Jan 14 00:29:33.215413 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. Jan 14 00:29:33.230154 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 00:29:33.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.248895 systemd-journald[1200]: Time spent on flushing to /var/log/journal/06071e2836284cfb9a4b857d86907f83 is 131.066ms for 1299 entries. Jan 14 00:29:33.248895 systemd-journald[1200]: System Journal (/var/log/journal/06071e2836284cfb9a4b857d86907f83) is 8M, max 588.1M, 580.1M free. Jan 14 00:29:33.428784 systemd-journald[1200]: Received client request to flush runtime journal. Jan 14 00:29:33.428976 kernel: loop2: detected capacity change from 0 to 8 Jan 14 00:29:33.429014 kernel: loop3: detected capacity change from 0 to 100192 Jan 14 00:29:33.429035 kernel: loop4: detected capacity change from 0 to 45344 Jan 14 00:29:33.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.273234 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 14 00:29:33.278972 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 00:29:33.299362 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 00:29:33.432580 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 00:29:33.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.444031 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 14 00:29:33.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.446000 audit: BPF prog-id=18 op=LOAD Jan 14 00:29:33.446000 audit: BPF prog-id=19 op=LOAD Jan 14 00:29:33.446000 audit: BPF prog-id=20 op=LOAD Jan 14 00:29:33.450000 audit: BPF prog-id=21 op=LOAD Jan 14 00:29:33.450055 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 14 00:29:33.453097 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 00:29:33.458215 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
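Editorial aside (not part of the log): systemd-journald reports above that it flushed the runtime journal to persistent storage under /var/log/journal. A hedged sketch for reading recent entries back out of that journal via journalctl's JSON output (assumes journalctl is in PATH and the caller may read the journal):

import json
import subprocess

def recent_entries(count: int = 10) -> list[dict]:
    # journalctl emits one JSON object per line with -o json.
    out = subprocess.run(
        ["journalctl", "-o", "json", "-n", str(count)],
        check=True, capture_output=True, text=True,
    ).stdout
    return [json.loads(line) for line in out.splitlines() if line.strip()]

for entry in recent_entries(5):
    print(entry.get("_SYSTEMD_UNIT", "-"), entry.get("MESSAGE", ""))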
Jan 14 00:29:33.463852 kernel: loop5: detected capacity change from 0 to 211168 Jan 14 00:29:33.471000 audit: BPF prog-id=22 op=LOAD Jan 14 00:29:33.471000 audit: BPF prog-id=23 op=LOAD Jan 14 00:29:33.472000 audit: BPF prog-id=24 op=LOAD Jan 14 00:29:33.476000 audit: BPF prog-id=25 op=LOAD Jan 14 00:29:33.476000 audit: BPF prog-id=26 op=LOAD Jan 14 00:29:33.476000 audit: BPF prog-id=27 op=LOAD Jan 14 00:29:33.475033 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 14 00:29:33.479000 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 14 00:29:33.515782 systemd-tmpfiles[1280]: ACLs are not supported, ignoring. Jan 14 00:29:33.515801 systemd-tmpfiles[1280]: ACLs are not supported, ignoring. Jan 14 00:29:33.527928 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 00:29:33.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.590559 kernel: loop6: detected capacity change from 0 to 8 Jan 14 00:29:33.590651 kernel: loop7: detected capacity change from 0 to 100192 Jan 14 00:29:33.590681 kernel: loop1: detected capacity change from 0 to 45344 Jan 14 00:29:33.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.555925 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 14 00:29:33.571591 systemd-nsresourced[1283]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 14 00:29:33.579761 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 14 00:29:33.615640 (sd-merge)[1281]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-hetzner.raw'. Jan 14 00:29:33.623638 (sd-merge)[1281]: Merged extensions into '/usr'. Jan 14 00:29:33.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:33.637929 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 14 00:29:33.643501 systemd[1]: Reload requested from client PID 1222 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 00:29:33.643523 systemd[1]: Reloading... Jan 14 00:29:33.692748 systemd-oomd[1278]: No swap; memory pressure usage will be degraded Jan 14 00:29:33.699714 systemd-resolved[1279]: Positive Trust Anchors: Jan 14 00:29:33.700168 systemd-resolved[1279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 00:29:33.700250 systemd-resolved[1279]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 00:29:33.700328 systemd-resolved[1279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 00:29:33.714582 systemd-resolved[1279]: Using system hostname 'ci-4547-0-0-n-a43761813d'. Jan 14 00:29:33.770864 zram_generator::config[1330]: No configuration found. Jan 14 00:29:33.982423 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 14 00:29:33.982608 systemd[1]: Reloading finished in 338 ms. Jan 14 00:29:34.003939 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 14 00:29:34.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:34.005142 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 00:29:34.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:34.007882 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 00:29:34.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:34.012277 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 00:29:34.015845 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 14 00:29:34.023421 systemd[1]: Starting ensure-sysext.service... 
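Editorial aside (not part of the log): the reload above is requested by systemd-sysext after sd-merge overlays the Flatcar extension images (containerd, docker, kubernetes, oem-hetzner) onto /usr. As a hedged illustration, the merged state can be inspected afterwards with the systemd-sysext tool; a minimal wrapper:

import subprocess

def sysext_status() -> str:
    # "status" is the default verb of systemd-sysext; it reports which
    # extension images are currently merged into the hierarchy.
    return subprocess.run(
        ["systemd-sysext", "status"],
        check=True, capture_output=True, text=True,
    ).stdout

print(sysext_status())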
Jan 14 00:29:34.031000 audit: BPF prog-id=28 op=LOAD Jan 14 00:29:34.031000 audit: BPF prog-id=22 op=UNLOAD Jan 14 00:29:34.031000 audit: BPF prog-id=29 op=LOAD Jan 14 00:29:34.031000 audit: BPF prog-id=30 op=LOAD Jan 14 00:29:34.031000 audit: BPF prog-id=23 op=UNLOAD Jan 14 00:29:34.031000 audit: BPF prog-id=24 op=UNLOAD Jan 14 00:29:34.032000 audit: BPF prog-id=31 op=LOAD Jan 14 00:29:34.032000 audit: BPF prog-id=21 op=UNLOAD Jan 14 00:29:34.033000 audit: BPF prog-id=32 op=LOAD Jan 14 00:29:34.033000 audit: BPF prog-id=18 op=UNLOAD Jan 14 00:29:34.036000 audit: BPF prog-id=33 op=LOAD Jan 14 00:29:34.036000 audit: BPF prog-id=34 op=LOAD Jan 14 00:29:34.036000 audit: BPF prog-id=19 op=UNLOAD Jan 14 00:29:34.037000 audit: BPF prog-id=20 op=UNLOAD Jan 14 00:29:34.037000 audit: BPF prog-id=35 op=LOAD Jan 14 00:29:34.037000 audit: BPF prog-id=25 op=UNLOAD Jan 14 00:29:34.037000 audit: BPF prog-id=36 op=LOAD Jan 14 00:29:34.037000 audit: BPF prog-id=37 op=LOAD Jan 14 00:29:34.037000 audit: BPF prog-id=26 op=UNLOAD Jan 14 00:29:34.037000 audit: BPF prog-id=27 op=UNLOAD Jan 14 00:29:34.038000 audit: BPF prog-id=38 op=LOAD Jan 14 00:29:34.038000 audit: BPF prog-id=15 op=UNLOAD Jan 14 00:29:34.038000 audit: BPF prog-id=39 op=LOAD Jan 14 00:29:34.038000 audit: BPF prog-id=40 op=LOAD Jan 14 00:29:34.038000 audit: BPF prog-id=16 op=UNLOAD Jan 14 00:29:34.039000 audit: BPF prog-id=17 op=UNLOAD Jan 14 00:29:34.028040 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 00:29:34.042363 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 14 00:29:34.048160 systemd[1]: Reload requested from client PID 1367 ('systemctl') (unit ensure-sysext.service)... Jan 14 00:29:34.048184 systemd[1]: Reloading... Jan 14 00:29:34.062700 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 14 00:29:34.063359 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 14 00:29:34.063874 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 14 00:29:34.066352 systemd-tmpfiles[1368]: ACLs are not supported, ignoring. Jan 14 00:29:34.066597 systemd-tmpfiles[1368]: ACLs are not supported, ignoring. Jan 14 00:29:34.082561 systemd-tmpfiles[1368]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 00:29:34.082670 systemd-tmpfiles[1368]: Skipping /boot Jan 14 00:29:34.127175 systemd-tmpfiles[1368]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 00:29:34.127191 systemd-tmpfiles[1368]: Skipping /boot Jan 14 00:29:34.154844 zram_generator::config[1402]: No configuration found. Jan 14 00:29:34.346141 systemd[1]: Reloading finished in 297 ms. Jan 14 00:29:34.364899 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 14 00:29:34.368828 kernel: kauditd_printk_skb: 136 callbacks suppressed Jan 14 00:29:34.368922 kernel: audit: type=1130 audit(1768350574.364:178): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:34.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:29:34.370000 audit: BPF prog-id=41 op=LOAD Jan 14 00:29:34.374855 kernel: audit: type=1334 audit(1768350574.370:179): prog-id=41 op=LOAD Jan 14 00:29:34.374958 kernel: audit: type=1334 audit(1768350574.372:180): prog-id=31 op=UNLOAD Jan 14 00:29:34.374977 kernel: audit: type=1334 audit(1768350574.372:181): prog-id=42 op=LOAD Jan 14 00:29:34.372000 audit: BPF prog-id=31 op=UNLOAD Jan 14 00:29:34.372000 audit: BPF prog-id=42 op=LOAD Jan 14 00:29:34.374000 audit: BPF prog-id=28 op=UNLOAD Jan 14 00:29:34.374000 audit: BPF prog-id=43 op=LOAD Jan 14 00:29:34.376907 kernel: audit: type=1334 audit(1768350574.374:182): prog-id=28 op=UNLOAD Jan 14 00:29:34.377018 kernel: audit: type=1334 audit(1768350574.374:183): prog-id=43 op=LOAD Jan 14 00:29:34.377043 kernel: audit: type=1334 audit(1768350574.374:184): prog-id=44 op=LOAD Jan 14 00:29:34.374000 audit: BPF prog-id=44 op=LOAD Jan 14 00:29:34.374000 audit: BPF prog-id=29 op=UNLOAD Jan 14 00:29:34.377876 kernel: audit: type=1334 audit(1768350574.374:185): prog-id=29 op=UNLOAD Jan 14 00:29:34.374000 audit: BPF prog-id=30 op=UNLOAD Jan 14 00:29:34.374000 audit: BPF prog-id=45 op=LOAD Jan 14 00:29:34.380842 kernel: audit: type=1334 audit(1768350574.374:186): prog-id=30 op=UNLOAD Jan 14 00:29:34.380941 kernel: audit: type=1334 audit(1768350574.374:187): prog-id=45 op=LOAD Jan 14 00:29:34.374000 audit: BPF prog-id=35 op=UNLOAD Jan 14 00:29:34.374000 audit: BPF prog-id=46 op=LOAD Jan 14 00:29:34.374000 audit: BPF prog-id=47 op=LOAD Jan 14 00:29:34.374000 audit: BPF prog-id=36 op=UNLOAD Jan 14 00:29:34.374000 audit: BPF prog-id=37 op=UNLOAD Jan 14 00:29:34.379000 audit: BPF prog-id=48 op=LOAD Jan 14 00:29:34.379000 audit: BPF prog-id=38 op=UNLOAD Jan 14 00:29:34.379000 audit: BPF prog-id=49 op=LOAD Jan 14 00:29:34.379000 audit: BPF prog-id=50 op=LOAD Jan 14 00:29:34.379000 audit: BPF prog-id=39 op=UNLOAD Jan 14 00:29:34.379000 audit: BPF prog-id=40 op=UNLOAD Jan 14 00:29:34.379000 audit: BPF prog-id=51 op=LOAD Jan 14 00:29:34.379000 audit: BPF prog-id=32 op=UNLOAD Jan 14 00:29:34.379000 audit: BPF prog-id=52 op=LOAD Jan 14 00:29:34.380000 audit: BPF prog-id=53 op=LOAD Jan 14 00:29:34.380000 audit: BPF prog-id=33 op=UNLOAD Jan 14 00:29:34.380000 audit: BPF prog-id=34 op=UNLOAD Jan 14 00:29:34.384135 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 00:29:34.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:34.394663 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 00:29:34.399155 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 14 00:29:34.417226 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 00:29:34.423369 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 00:29:34.424000 audit: BPF prog-id=7 op=UNLOAD Jan 14 00:29:34.424000 audit: BPF prog-id=54 op=LOAD Jan 14 00:29:34.424000 audit: BPF prog-id=55 op=LOAD Jan 14 00:29:34.425000 audit: BPF prog-id=8 op=UNLOAD Jan 14 00:29:34.428935 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 00:29:34.436392 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jan 14 00:29:34.443657 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 00:29:34.448266 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 00:29:34.457306 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 00:29:34.463210 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 00:29:34.464164 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 00:29:34.464404 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 00:29:34.464512 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 00:29:34.469375 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 00:29:34.469577 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 00:29:34.469733 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 00:29:34.469856 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 00:29:34.473965 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 00:29:34.494462 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 00:29:34.496730 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 00:29:34.497071 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 00:29:34.497286 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 00:29:34.504000 audit[1446]: SYSTEM_BOOT pid=1446 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 14 00:29:34.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:34.505951 systemd[1]: Finished ensure-sysext.service. Jan 14 00:29:34.512000 audit: BPF prog-id=56 op=LOAD Jan 14 00:29:34.516201 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 14 00:29:34.538875 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 14 00:29:34.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:34.541286 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 00:29:34.541923 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 00:29:34.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:34.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:34.551204 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 00:29:34.556008 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 00:29:34.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:34.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:34.560617 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 00:29:34.576534 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 00:29:34.576919 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 00:29:34.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:34.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:34.580037 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 00:29:34.580481 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 00:29:34.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:34.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:34.585052 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 00:29:34.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:34.586206 systemd-udevd[1444]: Using default interface naming scheme 'v257'. 
Jan 14 00:29:34.588561 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 00:29:34.612000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 00:29:34.612000 audit[1482]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffc000d00 a2=420 a3=0 items=0 ppid=1440 pid=1482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:34.612000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 00:29:34.613726 augenrules[1482]: No rules Jan 14 00:29:34.617524 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 00:29:34.618197 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 00:29:34.644252 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 14 00:29:34.646142 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 00:29:34.659290 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 00:29:34.660956 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 00:29:34.672459 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 00:29:34.680948 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 00:29:34.827982 systemd-networkd[1497]: lo: Link UP Jan 14 00:29:34.828518 systemd-networkd[1497]: lo: Gained carrier Jan 14 00:29:34.830103 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 00:29:34.832665 systemd[1]: Reached target network.target - Network. Jan 14 00:29:34.837739 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 14 00:29:34.844778 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 14 00:29:34.915429 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 14 00:29:34.917515 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 14 00:29:34.939562 systemd-networkd[1497]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:29:34.940749 systemd-networkd[1497]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 00:29:34.942201 systemd-networkd[1497]: eth0: Link UP Jan 14 00:29:34.943157 systemd-networkd[1497]: eth0: Gained carrier Jan 14 00:29:34.943195 systemd-networkd[1497]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:29:34.999392 systemd-networkd[1497]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:29:34.999404 systemd-networkd[1497]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
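Editorial aside (not part of the log): the audit SYSCALL/PROCTITLE pair above records augenrules loading /etc/audit/audit.rules via auditctl; the proctitle field is the process's argv, hex-encoded with NUL separators. A small decoding sketch:

# Decode the hex-encoded, NUL-separated proctitle field from the audit record above.
proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
argv = bytes.fromhex(proctitle).split(b"\x00")
print([arg.decode() for arg in argv])
# -> ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']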
Jan 14 00:29:35.006041 systemd-networkd[1497]: eth0: DHCPv4 address 91.99.0.249/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 14 00:29:35.008948 systemd-networkd[1497]: eth1: Link UP Jan 14 00:29:35.014571 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Jan 14 00:29:35.017576 systemd-networkd[1497]: eth1: Gained carrier Jan 14 00:29:35.017613 systemd-networkd[1497]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:29:35.035321 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Jan 14 00:29:35.079279 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 00:29:35.093134 systemd-networkd[1497]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 14 00:29:35.093584 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Jan 14 00:29:35.094003 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Jan 14 00:29:35.169713 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 14 00:29:35.175018 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 00:29:35.223150 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 00:29:35.251083 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 14 00:29:35.251165 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 14 00:29:35.251220 kernel: [drm] features: -context_init Jan 14 00:29:35.252753 ldconfig[1442]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 00:29:35.253254 kernel: [drm] number of scanouts: 1 Jan 14 00:29:35.253909 kernel: [drm] number of cap sets: 0 Jan 14 00:29:35.259841 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Jan 14 00:29:35.260582 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 14 00:29:35.261016 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 00:29:35.264555 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 00:29:35.267771 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 00:29:35.275327 kernel: Console: switching to colour frame buffer device 160x50 Jan 14 00:29:35.286867 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 14 00:29:35.290993 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 00:29:35.333427 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 00:29:35.335280 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 00:29:35.335417 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 00:29:35.335454 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
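Editorial aside (not part of the log): systemd-networkd reports the DHCPv4 leases acquired for eth0 and eth1 above. A hedged sketch for confirming the resulting addresses from userspace using iproute2's JSON output (assumes the ip tool is available, as it is on Flatcar):

import json
import subprocess

def addresses() -> dict[str, list[str]]:
    # "ip -j addr show" emits a JSON array with one object per interface.
    links = json.loads(subprocess.run(
        ["ip", "-j", "addr", "show"],
        check=True, capture_output=True, text=True,
    ).stdout)
    return {
        link["ifname"]: [
            f"{a['local']}/{a['prefixlen']}" for a in link.get("addr_info", [])
        ]
        for link in links
    }

for name, addrs in addresses().items():
    print(name, addrs)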
Jan 14 00:29:35.339163 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 00:29:35.340520 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 00:29:35.358326 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 00:29:35.359696 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 00:29:35.361395 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 00:29:35.362903 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 00:29:35.364345 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 00:29:35.404155 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 00:29:35.406898 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 00:29:35.409228 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 00:29:35.418186 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 00:29:35.420355 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 00:29:35.421448 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 00:29:35.423063 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 00:29:35.425176 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 00:29:35.426034 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 00:29:35.426874 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 00:29:35.427787 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 14 00:29:35.428660 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 00:29:35.431421 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 00:29:35.431464 systemd[1]: Reached target paths.target - Path Units. Jan 14 00:29:35.432834 systemd[1]: Reached target timers.target - Timer Units. Jan 14 00:29:35.435522 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 00:29:35.438479 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 00:29:35.444941 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 14 00:29:35.446173 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 14 00:29:35.447167 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 00:29:35.463749 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 00:29:35.465354 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 14 00:29:35.467073 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 00:29:35.480445 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 00:29:35.481160 systemd[1]: Reached target basic.target - Basic System. 
Jan 14 00:29:35.481741 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 00:29:35.481771 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 00:29:35.483674 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 00:29:35.487060 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 14 00:29:35.490343 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 00:29:35.498965 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 00:29:35.503497 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 00:29:35.507279 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 00:29:35.507970 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 00:29:35.509456 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 00:29:35.514748 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 00:29:35.523393 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 14 00:29:35.525976 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 00:29:35.538591 jq[1571]: false Jan 14 00:29:35.539582 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 00:29:35.548196 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 00:29:35.555119 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 00:29:35.562717 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 00:29:35.567401 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 00:29:35.571612 systemd[1]: Starting update-engine.service - Update Engine... Jan 14 00:29:35.577920 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 00:29:35.584426 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 00:29:35.587915 extend-filesystems[1572]: Found /dev/sda6 Jan 14 00:29:35.586437 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 14 00:29:35.586713 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 00:29:35.608173 extend-filesystems[1572]: Found /dev/sda9 Jan 14 00:29:35.607641 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 00:29:35.617002 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 14 00:29:35.644687 extend-filesystems[1572]: Checking size of /dev/sda9 Jan 14 00:29:35.653058 jq[1588]: true Jan 14 00:29:35.668765 coreos-metadata[1568]: Jan 14 00:29:35.668 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 14 00:29:35.674455 coreos-metadata[1568]: Jan 14 00:29:35.674 INFO Fetch successful Jan 14 00:29:35.674455 coreos-metadata[1568]: Jan 14 00:29:35.674 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 14 00:29:35.674455 coreos-metadata[1568]: Jan 14 00:29:35.674 INFO Fetch successful Jan 14 00:29:35.680003 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 00:29:35.681937 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 14 00:29:35.721859 dbus-daemon[1569]: [system] SELinux support is enabled Jan 14 00:29:35.725293 tar[1593]: linux-arm64/LICENSE Jan 14 00:29:35.725293 tar[1593]: linux-arm64/helm Jan 14 00:29:35.722213 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 00:29:35.726844 extend-filesystems[1572]: Resized partition /dev/sda9 Jan 14 00:29:35.740361 extend-filesystems[1626]: resize2fs 1.47.3 (8-Jul-2025) Jan 14 00:29:35.729927 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 00:29:35.729965 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 00:29:35.740767 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 00:29:35.740792 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 00:29:35.750048 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 8410107 blocks Jan 14 00:29:35.757989 update_engine[1587]: I20260114 00:29:35.744691 1587 main.cc:92] Flatcar Update Engine starting Jan 14 00:29:35.763820 jq[1615]: true Jan 14 00:29:35.775427 update_engine[1587]: I20260114 00:29:35.774854 1587 update_check_scheduler.cc:74] Next update check in 3m22s Jan 14 00:29:35.776876 systemd[1]: Started update-engine.service - Update Engine. Jan 14 00:29:35.806329 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 00:29:35.866631 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 00:29:35.868086 systemd-logind[1580]: New seat seat0. Jan 14 00:29:35.872673 systemd-logind[1580]: Watching system buttons on /dev/input/event0 (Power Button) Jan 14 00:29:35.872690 systemd-logind[1580]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 14 00:29:35.876964 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 00:29:35.924442 kernel: EXT4-fs (sda9): resized filesystem to 8410107 Jan 14 00:29:35.961469 extend-filesystems[1626]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 14 00:29:35.961469 extend-filesystems[1626]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 14 00:29:35.961469 extend-filesystems[1626]: The filesystem on /dev/sda9 is now 8410107 (4k) blocks long. Jan 14 00:29:35.975940 extend-filesystems[1572]: Resized filesystem in /dev/sda9 Jan 14 00:29:35.966699 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 00:29:35.980147 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
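[Editor's note] A quick sanity check of the extend-filesystems output above: resize2fs grew /dev/sda9 from 1617920 to 8410107 blocks of 4 KiB. A minimal Python sketch of that arithmetic (block counts copied from the log; the GiB conversion is only for illustration):

# Block counts reported by resize2fs / the kernel in the log above; ext4 uses 4 KiB blocks here.
BLOCK_SIZE = 4096
old_blocks = 1_617_920
new_blocks = 8_410_107

def to_gib(blocks: int, block_size: int = BLOCK_SIZE) -> float:
    """Convert an ext4 block count to GiB."""
    return blocks * block_size / 2**30

print(f"before resize: {to_gib(old_blocks):.2f} GiB")   # ~6.17 GiB
print(f"after  resize: {to_gib(new_blocks):.2f} GiB")   # ~32.08 GiB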
Jan 14 00:29:35.985196 bash[1651]: Updated "/home/core/.ssh/authorized_keys" Jan 14 00:29:35.989138 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 00:29:35.992598 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 14 00:29:36.003734 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 14 00:29:36.006945 systemd[1]: Starting sshkeys.service... Jan 14 00:29:36.081378 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 14 00:29:36.092921 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 14 00:29:36.132511 containerd[1612]: time="2026-01-14T00:29:36Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 14 00:29:36.137602 locksmithd[1633]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 00:29:36.141281 containerd[1612]: time="2026-01-14T00:29:36.137969360Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 14 00:29:36.177285 coreos-metadata[1664]: Jan 14 00:29:36.176 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 14 00:29:36.180023 coreos-metadata[1664]: Jan 14 00:29:36.179 INFO Fetch successful Jan 14 00:29:36.183075 containerd[1612]: time="2026-01-14T00:29:36.183020960Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.36µs" Jan 14 00:29:36.183663 containerd[1612]: time="2026-01-14T00:29:36.183534000Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 14 00:29:36.183663 containerd[1612]: time="2026-01-14T00:29:36.183603320Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 14 00:29:36.183663 containerd[1612]: time="2026-01-14T00:29:36.183618720Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 14 00:29:36.183606 unknown[1664]: wrote ssh authorized keys file for user: core Jan 14 00:29:36.186624 containerd[1612]: time="2026-01-14T00:29:36.185307720Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 14 00:29:36.186624 containerd[1612]: time="2026-01-14T00:29:36.185356240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 00:29:36.186624 containerd[1612]: time="2026-01-14T00:29:36.185437200Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 00:29:36.186624 containerd[1612]: time="2026-01-14T00:29:36.185461560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 00:29:36.186624 containerd[1612]: time="2026-01-14T00:29:36.185865240Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 00:29:36.186624 containerd[1612]: 
time="2026-01-14T00:29:36.185887640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 00:29:36.186624 containerd[1612]: time="2026-01-14T00:29:36.185902280Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 00:29:36.186624 containerd[1612]: time="2026-01-14T00:29:36.185911800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 00:29:36.186624 containerd[1612]: time="2026-01-14T00:29:36.186135440Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 00:29:36.186624 containerd[1612]: time="2026-01-14T00:29:36.186157480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 14 00:29:36.186624 containerd[1612]: time="2026-01-14T00:29:36.186255720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 14 00:29:36.186624 containerd[1612]: time="2026-01-14T00:29:36.186523960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 00:29:36.186978 containerd[1612]: time="2026-01-14T00:29:36.186560120Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 00:29:36.186978 containerd[1612]: time="2026-01-14T00:29:36.186574040Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 14 00:29:36.187023 sshd_keygen[1604]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 00:29:36.190836 containerd[1612]: time="2026-01-14T00:29:36.190281720Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 14 00:29:36.191177 containerd[1612]: time="2026-01-14T00:29:36.191144200Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 14 00:29:36.191372 containerd[1612]: time="2026-01-14T00:29:36.191350960Z" level=info msg="metadata content store policy set" policy=shared Jan 14 00:29:36.207267 containerd[1612]: time="2026-01-14T00:29:36.206316000Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 14 00:29:36.207267 containerd[1612]: time="2026-01-14T00:29:36.206416000Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 00:29:36.207267 containerd[1612]: time="2026-01-14T00:29:36.206566960Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 00:29:36.207267 containerd[1612]: time="2026-01-14T00:29:36.206583800Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 14 00:29:36.207267 containerd[1612]: time="2026-01-14T00:29:36.206599040Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 14 00:29:36.207267 containerd[1612]: time="2026-01-14T00:29:36.206613400Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 14 00:29:36.207267 containerd[1612]: time="2026-01-14T00:29:36.206627040Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 14 00:29:36.207267 containerd[1612]: time="2026-01-14T00:29:36.206637480Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 14 00:29:36.207267 containerd[1612]: time="2026-01-14T00:29:36.206649720Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 14 00:29:36.207267 containerd[1612]: time="2026-01-14T00:29:36.206665280Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 14 00:29:36.207267 containerd[1612]: time="2026-01-14T00:29:36.206678800Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 14 00:29:36.207267 containerd[1612]: time="2026-01-14T00:29:36.206691240Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 14 00:29:36.207267 containerd[1612]: time="2026-01-14T00:29:36.206702360Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 14 00:29:36.207267 containerd[1612]: time="2026-01-14T00:29:36.206716680Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 14 00:29:36.207622 containerd[1612]: time="2026-01-14T00:29:36.206921280Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 14 00:29:36.207622 containerd[1612]: time="2026-01-14T00:29:36.206947960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 14 00:29:36.207622 containerd[1612]: time="2026-01-14T00:29:36.206966040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 14 00:29:36.207622 containerd[1612]: time="2026-01-14T00:29:36.206978000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 14 00:29:36.207622 containerd[1612]: time="2026-01-14T00:29:36.206994000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 14 00:29:36.207622 containerd[1612]: time="2026-01-14T00:29:36.207009280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 14 00:29:36.207622 containerd[1612]: time="2026-01-14T00:29:36.207023680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 14 00:29:36.207622 containerd[1612]: time="2026-01-14T00:29:36.207035920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 14 00:29:36.207622 containerd[1612]: time="2026-01-14T00:29:36.207048320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 14 00:29:36.207622 containerd[1612]: time="2026-01-14T00:29:36.207062200Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 14 00:29:36.207622 containerd[1612]: time="2026-01-14T00:29:36.207072720Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 14 00:29:36.207622 containerd[1612]: time="2026-01-14T00:29:36.207157280Z" level=info 
msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 14 00:29:36.207622 containerd[1612]: time="2026-01-14T00:29:36.207210880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 14 00:29:36.207622 containerd[1612]: time="2026-01-14T00:29:36.207227000Z" level=info msg="Start snapshots syncer" Jan 14 00:29:36.208278 containerd[1612]: time="2026-01-14T00:29:36.207894160Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 14 00:29:36.209338 containerd[1612]: time="2026-01-14T00:29:36.209273520Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 14 00:29:36.209692 containerd[1612]: time="2026-01-14T00:29:36.209600880Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 14 00:29:36.209779 containerd[1612]: time="2026-01-14T00:29:36.209765000Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 14 00:29:36.210313 containerd[1612]: time="2026-01-14T00:29:36.210060840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 14 00:29:36.210313 containerd[1612]: time="2026-01-14T00:29:36.210140200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 14 00:29:36.210313 containerd[1612]: time="2026-01-14T00:29:36.210161920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 14 00:29:36.210313 containerd[1612]: time="2026-01-14T00:29:36.210174320Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 
Jan 14 00:29:36.210313 containerd[1612]: time="2026-01-14T00:29:36.210189040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 14 00:29:36.210313 containerd[1612]: time="2026-01-14T00:29:36.210201880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 14 00:29:36.210313 containerd[1612]: time="2026-01-14T00:29:36.210213880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 14 00:29:36.210313 containerd[1612]: time="2026-01-14T00:29:36.210224920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 14 00:29:36.210313 containerd[1612]: time="2026-01-14T00:29:36.210238200Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 14 00:29:36.212792 containerd[1612]: time="2026-01-14T00:29:36.211304000Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 00:29:36.212792 containerd[1612]: time="2026-01-14T00:29:36.211351200Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 00:29:36.212792 containerd[1612]: time="2026-01-14T00:29:36.211363160Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 00:29:36.212792 containerd[1612]: time="2026-01-14T00:29:36.211375280Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 00:29:36.212792 containerd[1612]: time="2026-01-14T00:29:36.211393760Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 14 00:29:36.212792 containerd[1612]: time="2026-01-14T00:29:36.211406360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 14 00:29:36.212792 containerd[1612]: time="2026-01-14T00:29:36.211418720Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 14 00:29:36.212792 containerd[1612]: time="2026-01-14T00:29:36.211510520Z" level=info msg="runtime interface created" Jan 14 00:29:36.212792 containerd[1612]: time="2026-01-14T00:29:36.211516480Z" level=info msg="created NRI interface" Jan 14 00:29:36.212792 containerd[1612]: time="2026-01-14T00:29:36.211525920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 14 00:29:36.212792 containerd[1612]: time="2026-01-14T00:29:36.211543120Z" level=info msg="Connect containerd service" Jan 14 00:29:36.212792 containerd[1612]: time="2026-01-14T00:29:36.211583880Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 00:29:36.214657 containerd[1612]: time="2026-01-14T00:29:36.214601560Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 00:29:36.227933 update-ssh-keys[1678]: Updated "/home/core/.ssh/authorized_keys" Jan 14 00:29:36.226804 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 14 00:29:36.233920 systemd[1]: Finished sshkeys.service. 
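[Editor's note] coreos-metadata above fetches the Hetzner instance metadata and then the public SSH keys that end up in /home/core/.ssh/authorized_keys, both from the link-local endpoint shown in the log. A rough Python sketch of the same two requests (URLs taken from the log; this is an illustration, not the agent's actual implementation, and it only works from the instance itself):

import urllib.request

BASE = "http://169.254.169.254/hetzner/v1/metadata"

def fetch(url: str) -> str:
    # The metadata service is only reachable from inside the instance.
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.read().decode()

metadata = fetch(BASE)                    # instance metadata document
ssh_keys = fetch(BASE + "/public-keys")   # keys written to ~core/.ssh/authorized_keys
print(ssh_keys)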
Jan 14 00:29:36.239199 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 00:29:36.247285 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 14 00:29:36.281068 systemd-networkd[1497]: eth0: Gained IPv6LL Jan 14 00:29:36.281735 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Jan 14 00:29:36.291147 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 00:29:36.293398 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 00:29:36.294926 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 00:29:36.297026 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 00:29:36.302503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:29:36.310200 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 00:29:36.313563 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 14 00:29:36.339163 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 00:29:36.349647 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 00:29:36.353387 containerd[1612]: time="2026-01-14T00:29:36.352306360Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 00:29:36.353387 containerd[1612]: time="2026-01-14T00:29:36.352381320Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 14 00:29:36.353387 containerd[1612]: time="2026-01-14T00:29:36.352420280Z" level=info msg="Start subscribing containerd event" Jan 14 00:29:36.353387 containerd[1612]: time="2026-01-14T00:29:36.352466680Z" level=info msg="Start recovering state" Jan 14 00:29:36.353387 containerd[1612]: time="2026-01-14T00:29:36.352563200Z" level=info msg="Start event monitor" Jan 14 00:29:36.353387 containerd[1612]: time="2026-01-14T00:29:36.352576160Z" level=info msg="Start cni network conf syncer for default" Jan 14 00:29:36.353387 containerd[1612]: time="2026-01-14T00:29:36.352583640Z" level=info msg="Start streaming server" Jan 14 00:29:36.353387 containerd[1612]: time="2026-01-14T00:29:36.352592960Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 14 00:29:36.353387 containerd[1612]: time="2026-01-14T00:29:36.352602160Z" level=info msg="runtime interface starting up..." Jan 14 00:29:36.353387 containerd[1612]: time="2026-01-14T00:29:36.352609800Z" level=info msg="starting plugins..." Jan 14 00:29:36.353387 containerd[1612]: time="2026-01-14T00:29:36.352626520Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 14 00:29:36.356626 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 14 00:29:36.364590 containerd[1612]: time="2026-01-14T00:29:36.361424280Z" level=info msg="containerd successfully booted in 0.233597s" Jan 14 00:29:36.359165 systemd[1]: Reached target getty.target - Login Prompts. Jan 14 00:29:36.360520 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 00:29:36.411661 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 00:29:36.529267 tar[1593]: linux-arm64/README.md Jan 14 00:29:36.548535 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 14 00:29:36.665116 systemd-networkd[1497]: eth1: Gained IPv6LL Jan 14 00:29:36.666792 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. 
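[Editor's note] containerd reports serving on /run/containerd/containerd.sock just before systemd marks the service as started. A trivial sketch, assuming it is run on the booted host, to confirm that path is a UNIX socket (path taken from the journal entries above):

import os
import stat

SOCK = "/run/containerd/containerd.sock"  # socket path from the log above

st = os.stat(SOCK)
print("is a socket:", stat.S_ISSOCK(st.st_mode))
print("mode:", oct(st.st_mode & 0o777))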
Jan 14 00:29:37.336335 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:29:37.338597 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 00:29:37.341989 systemd[1]: Startup finished in 2.058s (kernel) + 5.707s (initrd) + 5.423s (userspace) = 13.189s. Jan 14 00:29:37.352884 (kubelet)[1726]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:29:38.059046 kubelet[1726]: E0114 00:29:38.058993 1726 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:29:38.062889 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:29:38.063209 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:29:38.063934 systemd[1]: kubelet.service: Consumed 984ms CPU time, 259.5M memory peak. Jan 14 00:29:48.313562 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 00:29:48.318143 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:29:48.515480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:29:48.535244 (kubelet)[1745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:29:48.590435 kubelet[1745]: E0114 00:29:48.590299 1745 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:29:48.595698 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:29:48.595877 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:29:48.597046 systemd[1]: kubelet.service: Consumed 223ms CPU time, 105M memory peak. Jan 14 00:29:51.314978 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 00:29:51.319251 systemd[1]: Started sshd@0-91.99.0.249:22-4.153.228.146:48726.service - OpenSSH per-connection server daemon (4.153.228.146:48726). Jan 14 00:29:51.916117 sshd[1752]: Accepted publickey for core from 4.153.228.146 port 48726 ssh2: RSA SHA256:G2BOS0iIRk9EQIJiUwTXMI6Ge/QGgk5HV0uKx8xVGik Jan 14 00:29:51.920502 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:29:51.936673 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 00:29:51.941277 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 00:29:51.953919 systemd-logind[1580]: New session 1 of user core. Jan 14 00:29:51.985464 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 00:29:51.990058 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 00:29:52.018350 (systemd)[1758]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:29:52.022855 systemd-logind[1580]: New session 2 of user core. 
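[Editor's note] The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet (that file is normally written later during node bootstrap, e.g. by kubeadm), and systemd keeps rescheduling the unit. A small sketch, using the two timestamps from the log, of the gap between the failure and the scheduled restart; the ~10 s delta is consistent with a restart delay on the order of ten seconds, though the unit's actual RestartSec is not shown in this log:

from datetime import datetime

# Timestamps copied from the journal lines above: the kubelet exits at
# 00:29:38 and the scheduled restart fires at 00:29:48.
failed    = datetime.strptime("Jan 14 00:29:38.063209", "%b %d %H:%M:%S.%f")
restarted = datetime.strptime("Jan 14 00:29:48.313562", "%b %d %H:%M:%S.%f")

print(f"restart delay: {(restarted - failed).total_seconds():.1f}s")  # ~10.3s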
Jan 14 00:29:52.167904 systemd[1758]: Queued start job for default target default.target. Jan 14 00:29:52.191480 systemd[1758]: Created slice app.slice - User Application Slice. Jan 14 00:29:52.191907 systemd[1758]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 14 00:29:52.192124 systemd[1758]: Reached target paths.target - Paths. Jan 14 00:29:52.192228 systemd[1758]: Reached target timers.target - Timers. Jan 14 00:29:52.194309 systemd[1758]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 00:29:52.198104 systemd[1758]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 14 00:29:52.220362 systemd[1758]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 00:29:52.220452 systemd[1758]: Reached target sockets.target - Sockets. Jan 14 00:29:52.222741 systemd[1758]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 14 00:29:52.222862 systemd[1758]: Reached target basic.target - Basic System. Jan 14 00:29:52.222984 systemd[1758]: Reached target default.target - Main User Target. Jan 14 00:29:52.223015 systemd[1758]: Startup finished in 191ms. Jan 14 00:29:52.223413 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 00:29:52.229423 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 00:29:52.543133 systemd[1]: Started sshd@1-91.99.0.249:22-4.153.228.146:48732.service - OpenSSH per-connection server daemon (4.153.228.146:48732). Jan 14 00:29:53.089853 sshd[1772]: Accepted publickey for core from 4.153.228.146 port 48732 ssh2: RSA SHA256:G2BOS0iIRk9EQIJiUwTXMI6Ge/QGgk5HV0uKx8xVGik Jan 14 00:29:53.091736 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:29:53.098553 systemd-logind[1580]: New session 3 of user core. Jan 14 00:29:53.104290 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 00:29:53.226203 systemd[1]: Started sshd@2-91.99.0.249:22-5.187.35.21:42496.service - OpenSSH per-connection server daemon (5.187.35.21:42496). Jan 14 00:29:53.382016 sshd[1776]: Connection closed by 4.153.228.146 port 48732 Jan 14 00:29:53.381690 sshd-session[1772]: pam_unix(sshd:session): session closed for user core Jan 14 00:29:53.388655 systemd-logind[1580]: Session 3 logged out. Waiting for processes to exit. Jan 14 00:29:53.388880 systemd[1]: sshd@1-91.99.0.249:22-4.153.228.146:48732.service: Deactivated successfully. Jan 14 00:29:53.390798 systemd[1]: session-3.scope: Deactivated successfully. Jan 14 00:29:53.393466 systemd-logind[1580]: Removed session 3. Jan 14 00:29:53.496265 systemd[1]: Started sshd@3-91.99.0.249:22-4.153.228.146:48736.service - OpenSSH per-connection server daemon (4.153.228.146:48736). Jan 14 00:29:54.052993 sshd[1785]: Accepted publickey for core from 4.153.228.146 port 48736 ssh2: RSA SHA256:G2BOS0iIRk9EQIJiUwTXMI6Ge/QGgk5HV0uKx8xVGik Jan 14 00:29:54.058524 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:29:54.067686 systemd-logind[1580]: New session 4 of user core. Jan 14 00:29:54.079194 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 00:29:54.342624 sshd[1790]: Connection closed by 4.153.228.146 port 48736 Jan 14 00:29:54.342445 sshd-session[1785]: pam_unix(sshd:session): session closed for user core Jan 14 00:29:54.350616 systemd[1]: sshd@3-91.99.0.249:22-4.153.228.146:48736.service: Deactivated successfully. 
Jan 14 00:29:54.352959 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 00:29:54.369418 systemd-logind[1580]: Session 4 logged out. Waiting for processes to exit. Jan 14 00:29:54.371315 systemd-logind[1580]: Removed session 4. Jan 14 00:29:54.461083 systemd[1]: Started sshd@4-91.99.0.249:22-4.153.228.146:48742.service - OpenSSH per-connection server daemon (4.153.228.146:48742). Jan 14 00:29:55.046368 sshd[1796]: Accepted publickey for core from 4.153.228.146 port 48742 ssh2: RSA SHA256:G2BOS0iIRk9EQIJiUwTXMI6Ge/QGgk5HV0uKx8xVGik Jan 14 00:29:55.049403 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:29:55.058443 systemd-logind[1580]: New session 5 of user core. Jan 14 00:29:55.064352 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 00:29:55.354958 sshd[1800]: Connection closed by 4.153.228.146 port 48742 Jan 14 00:29:55.354047 sshd-session[1796]: pam_unix(sshd:session): session closed for user core Jan 14 00:29:55.361797 systemd[1]: sshd@4-91.99.0.249:22-4.153.228.146:48742.service: Deactivated successfully. Jan 14 00:29:55.368032 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 00:29:55.373908 systemd-logind[1580]: Session 5 logged out. Waiting for processes to exit. Jan 14 00:29:55.375945 systemd-logind[1580]: Removed session 5. Jan 14 00:29:55.466540 systemd[1]: Started sshd@5-91.99.0.249:22-4.153.228.146:50360.service - OpenSSH per-connection server daemon (4.153.228.146:50360). Jan 14 00:29:56.029711 sshd[1806]: Accepted publickey for core from 4.153.228.146 port 50360 ssh2: RSA SHA256:G2BOS0iIRk9EQIJiUwTXMI6Ge/QGgk5HV0uKx8xVGik Jan 14 00:29:56.032740 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:29:56.040326 systemd-logind[1580]: New session 6 of user core. Jan 14 00:29:56.046148 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 00:29:56.262499 sudo[1811]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 00:29:56.263603 sudo[1811]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 00:29:56.285205 sudo[1811]: pam_unix(sudo:session): session closed for user root Jan 14 00:29:56.388882 sshd[1810]: Connection closed by 4.153.228.146 port 50360 Jan 14 00:29:56.391175 sshd-session[1806]: pam_unix(sshd:session): session closed for user core Jan 14 00:29:56.408164 systemd-logind[1580]: Session 6 logged out. Waiting for processes to exit. Jan 14 00:29:56.410721 systemd[1]: sshd@5-91.99.0.249:22-4.153.228.146:50360.service: Deactivated successfully. Jan 14 00:29:56.415677 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 00:29:56.422587 systemd-logind[1580]: Removed session 6. Jan 14 00:29:56.438984 sshd[1778]: Connection closed by authenticating user root 5.187.35.21 port 42496 [preauth] Jan 14 00:29:56.447260 systemd[1]: sshd@2-91.99.0.249:22-5.187.35.21:42496.service: Deactivated successfully. Jan 14 00:29:56.487521 systemd[1]: Started sshd@6-91.99.0.249:22-5.187.35.21:42538.service - OpenSSH per-connection server daemon (5.187.35.21:42538). Jan 14 00:29:56.518971 systemd[1]: Started sshd@7-91.99.0.249:22-4.153.228.146:50364.service - OpenSSH per-connection server daemon (4.153.228.146:50364). 
Jan 14 00:29:57.079065 sshd[1822]: Accepted publickey for core from 4.153.228.146 port 50364 ssh2: RSA SHA256:G2BOS0iIRk9EQIJiUwTXMI6Ge/QGgk5HV0uKx8xVGik Jan 14 00:29:57.082032 sshd-session[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:29:57.091161 systemd-logind[1580]: New session 7 of user core. Jan 14 00:29:57.096259 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 14 00:29:57.285407 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 00:29:57.285685 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 00:29:57.289478 sudo[1830]: pam_unix(sudo:session): session closed for user root Jan 14 00:29:57.300133 sudo[1829]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 00:29:57.300432 sudo[1829]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 00:29:57.310390 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 00:29:57.369044 kernel: kauditd_printk_skb: 38 callbacks suppressed Jan 14 00:29:57.369179 kernel: audit: type=1305 audit(1768350597.366:224): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 00:29:57.366000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 00:29:57.366000 audit[1854]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffce1f15b0 a2=420 a3=0 items=0 ppid=1835 pid=1854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:57.371897 augenrules[1854]: No rules Jan 14 00:29:57.372541 kernel: audit: type=1300 audit(1768350597.366:224): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffce1f15b0 a2=420 a3=0 items=0 ppid=1835 pid=1854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:57.373483 kernel: audit: type=1327 audit(1768350597.366:224): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 00:29:57.366000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 00:29:57.374104 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 00:29:57.374397 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 00:29:57.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:57.376417 kernel: audit: type=1130 audit(1768350597.373:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:57.379524 kernel: audit: type=1131 audit(1768350597.375:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:29:57.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:57.378070 sudo[1829]: pam_unix(sudo:session): session closed for user root Jan 14 00:29:57.377000 audit[1829]: USER_END pid=1829 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:29:57.379874 kernel: audit: type=1106 audit(1768350597.377:227): pid=1829 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:29:57.377000 audit[1829]: CRED_DISP pid=1829 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:29:57.383429 kernel: audit: type=1104 audit(1768350597.377:228): pid=1829 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:29:57.476620 sshd[1827]: Connection closed by 4.153.228.146 port 50364 Jan 14 00:29:57.477968 sshd-session[1822]: pam_unix(sshd:session): session closed for user core Jan 14 00:29:57.481000 audit[1822]: USER_END pid=1822 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:29:57.481000 audit[1822]: CRED_DISP pid=1822 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:29:57.487432 kernel: audit: type=1106 audit(1768350597.481:229): pid=1822 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:29:57.487514 kernel: audit: type=1104 audit(1768350597.481:230): pid=1822 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:29:57.486793 systemd[1]: sshd@7-91.99.0.249:22-4.153.228.146:50364.service: Deactivated successfully. Jan 14 00:29:57.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-91.99.0.249:22-4.153.228.146:50364 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:29:57.489855 kernel: audit: type=1131 audit(1768350597.485:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-91.99.0.249:22-4.153.228.146:50364 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:57.487990 systemd-logind[1580]: Session 7 logged out. Waiting for processes to exit. Jan 14 00:29:57.489971 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 00:29:57.493572 systemd-logind[1580]: Removed session 7. Jan 14 00:29:57.584675 systemd[1]: Started sshd@8-91.99.0.249:22-4.153.228.146:50372.service - OpenSSH per-connection server daemon (4.153.228.146:50372). Jan 14 00:29:57.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-91.99.0.249:22-4.153.228.146:50372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:58.128000 audit[1863]: USER_ACCT pid=1863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:29:58.129895 sshd[1863]: Accepted publickey for core from 4.153.228.146 port 50372 ssh2: RSA SHA256:G2BOS0iIRk9EQIJiUwTXMI6Ge/QGgk5HV0uKx8xVGik Jan 14 00:29:58.130000 audit[1863]: CRED_ACQ pid=1863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:29:58.130000 audit[1863]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe5b88ac0 a2=3 a3=0 items=0 ppid=1 pid=1863 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:58.130000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:29:58.131894 sshd-session[1863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:29:58.140167 systemd-logind[1580]: New session 8 of user core. Jan 14 00:29:58.147058 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 14 00:29:58.151000 audit[1863]: USER_START pid=1863 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:29:58.154000 audit[1867]: CRED_ACQ pid=1867 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:29:58.331000 audit[1868]: USER_ACCT pid=1868 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:29:58.332000 audit[1868]: CRED_REFR pid=1868 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 00:29:58.332000 audit[1868]: USER_START pid=1868 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:29:58.333008 sudo[1868]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 00:29:58.333303 sudo[1868]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 00:29:58.683341 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 00:29:58.685501 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 14 00:29:58.689199 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:29:58.702410 (dockerd)[1886]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 00:29:58.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:58.887256 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:29:58.896236 (kubelet)[1899]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:29:58.952008 kubelet[1899]: E0114 00:29:58.951331 1899 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:29:58.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 00:29:58.956214 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:29:58.956356 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:29:58.957179 systemd[1]: kubelet.service: Consumed 190ms CPU time, 104.5M memory peak. Jan 14 00:29:58.994284 dockerd[1886]: time="2026-01-14T00:29:58.994121520Z" level=info msg="Starting up" Jan 14 00:29:59.000215 dockerd[1886]: time="2026-01-14T00:29:59.000138080Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 14 00:29:59.017673 dockerd[1886]: time="2026-01-14T00:29:59.017612400Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 14 00:29:59.072114 dockerd[1886]: time="2026-01-14T00:29:59.071581760Z" level=info msg="Loading containers: start." 
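[Editor's note] The audit records in this log carry the triggering command line in their PROCTITLE field as NUL-separated hex. A short Python sketch for decoding it, using the auditctl record from the audit-rules run above as input; the same decoding applied to the NETFILTER_CFG records below yields the individual /usr/bin/iptables and /usr/bin/ip6tables invocations Docker uses to create its DOCKER chains (e.g. "/usr/bin/iptables --wait -t nat -N DOCKER"):

# PROCTITLE value copied from the audit record above; arguments are
# separated by NUL bytes in the raw field.
hex_proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"

argv = bytes.fromhex(hex_proctitle).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# -> /sbin/auditctl -R /etc/audit/audit.rules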
Jan 14 00:29:59.085855 kernel: Initializing XFRM netlink socket Jan 14 00:29:59.166000 audit[1951]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1951 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:29:59.166000 audit[1951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffffd9832b0 a2=0 a3=0 items=0 ppid=1886 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.166000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 00:29:59.173000 audit[1953]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1953 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:29:59.173000 audit[1953]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe46be4b0 a2=0 a3=0 items=0 ppid=1886 pid=1953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.173000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 00:29:59.177000 audit[1955]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1955 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:29:59.177000 audit[1955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe94cd4a0 a2=0 a3=0 items=0 ppid=1886 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.177000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 00:29:59.180000 audit[1957]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:29:59.180000 audit[1957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff1135040 a2=0 a3=0 items=0 ppid=1886 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.180000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 00:29:59.182000 audit[1959]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:29:59.182000 audit[1959]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffbc54000 a2=0 a3=0 items=0 ppid=1886 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.182000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 00:29:59.185000 audit[1961]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:29:59.185000 audit[1961]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=112 a0=3 a1=ffffdca4cbf0 a2=0 a3=0 items=0 ppid=1886 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.185000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 00:29:59.187000 audit[1963]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:29:59.187000 audit[1963]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffdf9f8b50 a2=0 a3=0 items=0 ppid=1886 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.187000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 00:29:59.190000 audit[1965]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:29:59.190000 audit[1965]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffff9caef00 a2=0 a3=0 items=0 ppid=1886 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.190000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 00:29:59.220000 audit[1968]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1968 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:29:59.220000 audit[1968]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=fffffec8e8b0 a2=0 a3=0 items=0 ppid=1886 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.220000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 00:29:59.225000 audit[1970]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:29:59.225000 audit[1970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffed450180 a2=0 a3=0 items=0 ppid=1886 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.225000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 00:29:59.229000 audit[1972]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1972 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:29:59.229000 audit[1972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 
a1=ffffeff6acc0 a2=0 a3=0 items=0 ppid=1886 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.229000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 00:29:59.234000 audit[1974]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1974 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:29:59.234000 audit[1974]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffd4a15fa0 a2=0 a3=0 items=0 ppid=1886 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.234000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 00:29:59.238000 audit[1976]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1976 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:29:59.238000 audit[1976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffcc0fe4b0 a2=0 a3=0 items=0 ppid=1886 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.238000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 00:29:59.292000 audit[2006]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:29:59.292000 audit[2006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffef6801d0 a2=0 a3=0 items=0 ppid=1886 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.292000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 00:29:59.296000 audit[2008]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:29:59.296000 audit[2008]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffcc2ff800 a2=0 a3=0 items=0 ppid=1886 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.296000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 00:29:59.299000 audit[2010]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:29:59.299000 audit[2010]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff7643620 a2=0 a3=0 items=0 ppid=1886 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.299000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 00:29:59.301000 audit[2012]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:29:59.301000 audit[2012]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc38ef820 a2=0 a3=0 items=0 ppid=1886 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.301000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 00:29:59.306000 audit[2014]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2014 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:29:59.306000 audit[2014]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffff613d40 a2=0 a3=0 items=0 ppid=1886 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.306000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 00:29:59.309000 audit[2016]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:29:59.309000 audit[2016]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffdbd755a0 a2=0 a3=0 items=0 ppid=1886 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.309000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 00:29:59.312000 audit[2018]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:29:59.312000 audit[2018]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe4686660 a2=0 a3=0 items=0 ppid=1886 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.312000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 00:29:59.315000 audit[2020]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2020 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:29:59.315000 audit[2020]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffffc9de740 a2=0 a3=0 items=0 ppid=1886 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.315000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 00:29:59.319000 audit[2022]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2022 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:29:59.319000 audit[2022]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffec221bc0 a2=0 a3=0 items=0 ppid=1886 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.319000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 00:29:59.322000 audit[2024]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:29:59.322000 audit[2024]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffdec6ec00 a2=0 a3=0 items=0 ppid=1886 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.322000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 00:29:59.325000 audit[2026]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2026 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:29:59.325000 audit[2026]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffee409860 a2=0 a3=0 items=0 ppid=1886 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.325000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 00:29:59.328000 audit[2028]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:29:59.328000 audit[2028]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc49ee190 a2=0 a3=0 items=0 ppid=1886 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.328000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 00:29:59.330000 audit[2030]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2030 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:29:59.330000 audit[2030]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffffe02a750 a2=0 a3=0 items=0 ppid=1886 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.330000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 00:29:59.337000 audit[2035]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:29:59.337000 audit[2035]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc3e8aea0 a2=0 a3=0 items=0 ppid=1886 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.337000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 00:29:59.341000 audit[2037]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:29:59.341000 audit[2037]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffffcefa190 a2=0 a3=0 items=0 ppid=1886 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.341000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 00:29:59.345000 audit[2039]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:29:59.345000 audit[2039]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffffffd2be0 a2=0 a3=0 items=0 ppid=1886 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.345000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 00:29:59.347000 audit[2041]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2041 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:29:59.347000 audit[2041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff97afc80 a2=0 a3=0 items=0 ppid=1886 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.347000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 00:29:59.350000 audit[2043]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2043 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:29:59.350000 audit[2043]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffc99e4da0 a2=0 a3=0 items=0 ppid=1886 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.350000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 00:29:59.353000 audit[2045]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2045 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:29:59.353000 audit[2045]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe67f6060 a2=0 a3=0 items=0 ppid=1886 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.353000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 00:29:59.364466 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Jan 14 00:29:59.378000 audit[2049]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:29:59.378000 audit[2049]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=fffffc958350 a2=0 a3=0 items=0 ppid=1886 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.378000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 00:29:59.380000 audit[2051]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:29:59.380000 audit[2051]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd2230ce0 a2=0 a3=0 items=0 ppid=1886 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.380000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 00:29:59.396000 audit[2059]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:29:59.396000 audit[2059]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffcbae0400 a2=0 a3=0 items=0 ppid=1886 pid=2059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.396000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 00:29:59.408000 audit[2065]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:29:59.408000 audit[2065]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffcc0f25d0 a2=0 a3=0 items=0 ppid=1886 pid=2065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.408000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 00:29:59.411000 audit[2067]: NETFILTER_CFG 
table=filter:38 family=2 entries=1 op=nft_register_rule pid=2067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:29:59.411000 audit[2067]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=fffff5ca3500 a2=0 a3=0 items=0 ppid=1886 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.411000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 00:29:59.415000 audit[2069]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2069 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:29:59.415000 audit[2069]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffcef78590 a2=0 a3=0 items=0 ppid=1886 pid=2069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.415000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 00:29:59.418000 audit[2071]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2071 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:29:59.418000 audit[2071]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffd31afcf0 a2=0 a3=0 items=0 ppid=1886 pid=2071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.418000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 00:29:59.425000 audit[2073]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2073 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:29:59.425000 audit[2073]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffdd70b050 a2=0 a3=0 items=0 ppid=1886 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:29:59.425000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 00:29:59.427878 systemd-networkd[1497]: docker0: Link UP Jan 14 00:29:59.437832 dockerd[1886]: time="2026-01-14T00:29:59.437707760Z" level=info msg="Loading containers: done." 
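The audit PROCTITLE fields in the records above are hex-encoded, NUL-separated argv strings, so the exact iptables/ip6tables commands Docker ran while creating its DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2 and DOCKER-USER chains can be recovered straight from the log. A minimal decoding sketch (the log file path is illustrative, assuming these records have been saved to a text file):

    import re

    # Turn an audit PROCTITLE value (hex-encoded, NUL-separated argv) back into a command line.
    def decode_proctitle(hex_argv: str) -> str:
        return " ".join(bytes.fromhex(hex_argv).decode(errors="replace").split("\x00"))

    with open("boot.log") as log:  # illustrative path to a saved copy of this journal
        for line in log:
            for match in re.finditer(r"proctitle=([0-9A-Fa-f]+)", line):
                print(decode_proctitle(match.group(1)))

Applied to the first of the records above, this prints /usr/bin/iptables --wait -t filter -N DOCKER-ISOLATION-STAGE-1.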
Jan 14 00:29:59.466054 dockerd[1886]: time="2026-01-14T00:29:59.465939720Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 00:29:59.466446 dockerd[1886]: time="2026-01-14T00:29:59.466160040Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 00:29:59.466517 dockerd[1886]: time="2026-01-14T00:29:59.466437640Z" level=info msg="Initializing buildkit" Jan 14 00:29:59.497097 dockerd[1886]: time="2026-01-14T00:29:59.496742640Z" level=info msg="Completed buildkit initialization" Jan 14 00:29:59.503466 dockerd[1886]: time="2026-01-14T00:29:59.503381560Z" level=info msg="Daemon has completed initialization" Jan 14 00:29:59.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:59.505176 dockerd[1886]: time="2026-01-14T00:29:59.503452680Z" level=info msg="API listen on /run/docker.sock" Jan 14 00:29:59.504119 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 00:29:59.522906 systemd-timesyncd[1464]: Contacted time server 88.99.86.9:123 (2.flatcar.pool.ntp.org). Jan 14 00:29:59.523601 systemd-timesyncd[1464]: Initial clock synchronization to Wed 2026-01-14 00:29:59.477447 UTC. Jan 14 00:29:59.627845 sshd[1820]: Connection closed by authenticating user root 5.187.35.21 port 42538 [preauth] Jan 14 00:29:59.627000 audit[1820]: USER_ERR pid=1820 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:29:59.631397 systemd[1]: sshd@6-91.99.0.249:22-5.187.35.21:42538.service: Deactivated successfully. Jan 14 00:29:59.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-91.99.0.249:22-5.187.35.21:42538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:29:59.659177 systemd[1]: Started sshd@9-91.99.0.249:22-5.187.35.21:42552.service - OpenSSH per-connection server daemon (5.187.35.21:42552). Jan 14 00:29:59.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-91.99.0.249:22-5.187.35.21:42552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:00.754085 containerd[1612]: time="2026-01-14T00:30:00.753998867Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 14 00:30:01.738285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4010439475.mount: Deactivated successfully. 
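The sshd entries that begin here show a pattern that repeats through the rest of this log: a client at 5.187.35.21 connects as root, is closed at the preauth stage, and systemd tears down the per-connection service and accepts the next attempt. A short tally sketch over a saved journal dump (the file name is illustrative):

    import re
    from collections import Counter

    # Count "[preauth]" closures per user/source address, matching lines such as
    # 'sshd[1820]: Connection closed by authenticating user root 5.187.35.21 port 42538 [preauth]'.
    pattern = re.compile(
        r"Connection closed by authenticating user (\S+) (\d{1,3}(?:\.\d{1,3}){3}) port \d+ \[preauth\]"
    )

    attempts = Counter()
    with open("boot.log") as log:  # illustrative path
        for line in log:
            match = pattern.search(line)
            if match:
                attempts[match.groups()] += 1

    for (user, addr), count in attempts.most_common():
        print(f"{count:4d}  {user}@{addr}")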
Jan 14 00:30:02.542163 containerd[1612]: time="2026-01-14T00:30:02.541918204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:30:02.545130 containerd[1612]: time="2026-01-14T00:30:02.545037923Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=25791161" Jan 14 00:30:02.546961 containerd[1612]: time="2026-01-14T00:30:02.546895851Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:30:02.553868 containerd[1612]: time="2026-01-14T00:30:02.552183080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:30:02.554937 containerd[1612]: time="2026-01-14T00:30:02.554872186Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 1.800811962s" Jan 14 00:30:02.555200 containerd[1612]: time="2026-01-14T00:30:02.555156309Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Jan 14 00:30:02.557212 containerd[1612]: time="2026-01-14T00:30:02.557150752Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 14 00:30:03.000335 sshd[2116]: Connection closed by authenticating user root 5.187.35.21 port 42552 [preauth] Jan 14 00:30:03.000000 audit[2116]: USER_ERR pid=2116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:03.006995 kernel: kauditd_printk_skb: 137 callbacks suppressed Jan 14 00:30:03.007141 kernel: audit: type=1109 audit(1768350603.000:287): pid=2116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:03.009593 systemd[1]: sshd@9-91.99.0.249:22-5.187.35.21:42552.service: Deactivated successfully. Jan 14 00:30:03.012746 kernel: audit: type=1131 audit(1768350603.009:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-91.99.0.249:22-5.187.35.21:42552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:03.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-91.99.0.249:22-5.187.35.21:42552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:03.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-91.99.0.249:22-5.187.35.21:25608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:30:03.036666 systemd[1]: Started sshd@10-91.99.0.249:22-5.187.35.21:25608.service - OpenSSH per-connection server daemon (5.187.35.21:25608). Jan 14 00:30:03.039877 kernel: audit: type=1130 audit(1768350603.035:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-91.99.0.249:22-5.187.35.21:25608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:04.310543 containerd[1612]: time="2026-01-14T00:30:04.310465870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:30:04.312891 containerd[1612]: time="2026-01-14T00:30:04.312782375Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23544927" Jan 14 00:30:04.313931 containerd[1612]: time="2026-01-14T00:30:04.313833612Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:30:04.319795 containerd[1612]: time="2026-01-14T00:30:04.319704428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:30:04.321608 containerd[1612]: time="2026-01-14T00:30:04.321151196Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.763939986s" Jan 14 00:30:04.321608 containerd[1612]: time="2026-01-14T00:30:04.321230670Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Jan 14 00:30:04.321831 containerd[1612]: time="2026-01-14T00:30:04.321781795Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 14 00:30:05.737871 containerd[1612]: time="2026-01-14T00:30:05.737087889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:30:05.739747 containerd[1612]: time="2026-01-14T00:30:05.739643190Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18289931" Jan 14 00:30:05.742032 containerd[1612]: time="2026-01-14T00:30:05.741921314Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:30:05.748838 containerd[1612]: time="2026-01-14T00:30:05.748742594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:30:05.749896 containerd[1612]: time="2026-01-14T00:30:05.749833491Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag 
\"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.42797601s" Jan 14 00:30:05.749896 containerd[1612]: time="2026-01-14T00:30:05.749892375Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Jan 14 00:30:05.751098 containerd[1612]: time="2026-01-14T00:30:05.750543536Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 14 00:30:06.293745 sshd[2180]: Connection closed by authenticating user root 5.187.35.21 port 25608 [preauth] Jan 14 00:30:06.294000 audit[2180]: USER_ERR pid=2180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:06.297831 kernel: audit: type=1109 audit(1768350606.294:290): pid=2180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:06.299457 systemd[1]: sshd@10-91.99.0.249:22-5.187.35.21:25608.service: Deactivated successfully. Jan 14 00:30:06.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-91.99.0.249:22-5.187.35.21:25608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:06.304871 kernel: audit: type=1131 audit(1768350606.299:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-91.99.0.249:22-5.187.35.21:25608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:06.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-91.99.0.249:22-5.187.35.21:25630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:06.329128 systemd[1]: Started sshd@11-91.99.0.249:22-5.187.35.21:25630.service - OpenSSH per-connection server daemon (5.187.35.21:25630). Jan 14 00:30:06.333992 kernel: audit: type=1130 audit(1768350606.328:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-91.99.0.249:22-5.187.35.21:25630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:06.906272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3105992960.mount: Deactivated successfully. 
Jan 14 00:30:07.419028 containerd[1612]: time="2026-01-14T00:30:07.418940139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:30:07.421763 containerd[1612]: time="2026-01-14T00:30:07.421646827Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28255594" Jan 14 00:30:07.422940 containerd[1612]: time="2026-01-14T00:30:07.422880617Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:30:07.426479 containerd[1612]: time="2026-01-14T00:30:07.426339566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:30:07.427704 containerd[1612]: time="2026-01-14T00:30:07.427640441Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.677045044s" Jan 14 00:30:07.427704 containerd[1612]: time="2026-01-14T00:30:07.427694427Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Jan 14 00:30:07.428324 containerd[1612]: time="2026-01-14T00:30:07.428302338Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 14 00:30:08.240481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount152681230.mount: Deactivated successfully. 
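containerd reports each completed pull with the image size in bytes and the elapsed time, so the effective pull rate can be read directly off these entries; kube-proxy above, for instance, is 28257692 bytes in about 1.68 s, roughly 16 MiB/s. The same arithmetic for the four pulls logged so far (figures copied from the messages above):

    # Effective pull rate per image; sizes (bytes) and durations (seconds) are taken
    # verbatim from the containerd "Pulled image" messages in this log.
    pulls = {
        "kube-apiserver:v1.33.7":          (27383880, 1.800811962),
        "kube-controller-manager:v1.33.7": (25137562, 1.763939986),
        "kube-scheduler:v1.33.7":          (19882566, 1.42797601),
        "kube-proxy:v1.33.7":              (28257692, 1.677045044),
    }

    for image, (size_bytes, seconds) in pulls.items():
        rate_mib = size_bytes / seconds / (1024 * 1024)
        print(f"{image:35s} {rate_mib:6.1f} MiB/s")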
Jan 14 00:30:09.005022 containerd[1612]: time="2026-01-14T00:30:09.004663774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:30:09.007400 containerd[1612]: time="2026-01-14T00:30:09.006432292Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=18888418" Jan 14 00:30:09.009682 containerd[1612]: time="2026-01-14T00:30:09.009628923Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:30:09.016135 containerd[1612]: time="2026-01-14T00:30:09.016016154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:30:09.020844 containerd[1612]: time="2026-01-14T00:30:09.020620850Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.592273705s" Jan 14 00:30:09.020844 containerd[1612]: time="2026-01-14T00:30:09.020688587Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jan 14 00:30:09.021655 containerd[1612]: time="2026-01-14T00:30:09.021585467Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 14 00:30:09.028234 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 00:30:09.031754 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:30:09.223890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:30:09.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:09.226892 kernel: audit: type=1130 audit(1768350609.223:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:09.244952 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:30:09.297909 kubelet[2266]: E0114 00:30:09.297833 2266 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:30:09.301744 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:30:09.302033 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:30:09.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 14 00:30:09.304340 systemd[1]: kubelet.service: Consumed 197ms CPU time, 105M memory peak. Jan 14 00:30:09.307914 kernel: audit: type=1131 audit(1768350609.303:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 00:30:09.627545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1524803815.mount: Deactivated successfully. Jan 14 00:30:09.641852 containerd[1612]: time="2026-01-14T00:30:09.641219567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 00:30:09.643538 containerd[1612]: time="2026-01-14T00:30:09.643474666Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 00:30:09.645725 containerd[1612]: time="2026-01-14T00:30:09.645658474Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 00:30:09.651838 containerd[1612]: time="2026-01-14T00:30:09.650846684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 00:30:09.652349 containerd[1612]: time="2026-01-14T00:30:09.652310184Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 630.665927ms" Jan 14 00:30:09.652514 containerd[1612]: time="2026-01-14T00:30:09.652494584Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 14 00:30:09.655054 containerd[1612]: time="2026-01-14T00:30:09.655024866Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 14 00:30:09.677119 sshd[2194]: Connection closed by authenticating user root 5.187.35.21 port 25630 [preauth] Jan 14 00:30:09.677000 audit[2194]: USER_ERR pid=2194 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:09.685863 kernel: audit: type=1109 audit(1768350609.677:295): pid=2194 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:09.693204 systemd[1]: sshd@11-91.99.0.249:22-5.187.35.21:25630.service: Deactivated successfully. Jan 14 00:30:09.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-91.99.0.249:22-5.187.35.21:25630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:30:09.708908 kernel: audit: type=1131 audit(1768350609.695:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-91.99.0.249:22-5.187.35.21:25630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:09.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-91.99.0.249:22-5.187.35.21:25666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:09.731376 systemd[1]: Started sshd@12-91.99.0.249:22-5.187.35.21:25666.service - OpenSSH per-connection server daemon (5.187.35.21:25666). Jan 14 00:30:09.740879 kernel: audit: type=1130 audit(1768350609.730:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-91.99.0.249:22-5.187.35.21:25666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:10.301657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1725005167.mount: Deactivated successfully. Jan 14 00:30:12.701292 containerd[1612]: time="2026-01-14T00:30:12.700274426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:30:12.703363 containerd[1612]: time="2026-01-14T00:30:12.703284704Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=57926377" Jan 14 00:30:12.706239 containerd[1612]: time="2026-01-14T00:30:12.706182723Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:30:12.716467 containerd[1612]: time="2026-01-14T00:30:12.716404111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:30:12.718559 containerd[1612]: time="2026-01-14T00:30:12.717660422Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.062444601s" Jan 14 00:30:12.718559 containerd[1612]: time="2026-01-14T00:30:12.717702289Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jan 14 00:30:13.225968 sshd[2280]: Connection closed by authenticating user root 5.187.35.21 port 25666 [preauth] Jan 14 00:30:13.225000 audit[2280]: USER_ERR pid=2280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:13.236105 kernel: audit: type=1109 audit(1768350613.225:298): pid=2280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:13.237623 systemd[1]: sshd@12-91.99.0.249:22-5.187.35.21:25666.service: Deactivated successfully. 
Jan 14 00:30:13.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-91.99.0.249:22-5.187.35.21:25666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:13.250090 kernel: audit: type=1131 audit(1768350613.238:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-91.99.0.249:22-5.187.35.21:25666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:13.274197 systemd[1]: Started sshd@13-91.99.0.249:22-5.187.35.21:16854.service - OpenSSH per-connection server daemon (5.187.35.21:16854). Jan 14 00:30:13.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-91.99.0.249:22-5.187.35.21:16854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:13.282939 kernel: audit: type=1130 audit(1768350613.274:300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-91.99.0.249:22-5.187.35.21:16854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:16.581170 sshd[2347]: Connection closed by authenticating user root 5.187.35.21 port 16854 [preauth] Jan 14 00:30:16.577000 audit[2347]: USER_ERR pid=2347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:16.593999 kernel: audit: type=1109 audit(1768350616.577:301): pid=2347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:16.594151 kernel: audit: type=1131 audit(1768350616.589:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-91.99.0.249:22-5.187.35.21:16854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:16.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-91.99.0.249:22-5.187.35.21:16854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:16.590112 systemd[1]: sshd@13-91.99.0.249:22-5.187.35.21:16854.service: Deactivated successfully. Jan 14 00:30:16.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-91.99.0.249:22-5.187.35.21:16904 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:16.621359 systemd[1]: Started sshd@14-91.99.0.249:22-5.187.35.21:16904.service - OpenSSH per-connection server daemon (5.187.35.21:16904). Jan 14 00:30:16.625881 kernel: audit: type=1130 audit(1768350616.620:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-91.99.0.249:22-5.187.35.21:16904 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:19.226111 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
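The kubelet exit recorded at 00:30:09 ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") is the usual state of a node whose kubelet configuration has not been written yet; the service is stopped and restarted here at 00:30:19, and once running, its "connection refused" errors against 91.99.0.249:6443 further down show that the API server it is meant to register with is not serving yet. A minimal pre-flight sketch covering both conditions (the path and endpoint are the ones appearing in this log):

    import os
    import socket

    # Mirror the two failure modes visible in this log: a missing kubelet config file
    # (the 00:30:09 exit) and an API server that is not yet reachable on port 6443.
    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"   # path named in the error above
    API_SERVER = ("91.99.0.249", 6443)                # endpoint seen in the entries below

    print("kubelet config present:", os.path.exists(KUBELET_CONFIG))

    try:
        with socket.create_connection(API_SERVER, timeout=2):
            print("API server reachable")
    except OSError as err:
        print("API server not reachable:", err)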
Jan 14 00:30:19.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:19.227019 systemd[1]: kubelet.service: Consumed 197ms CPU time, 105M memory peak. Jan 14 00:30:19.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:19.232841 kernel: audit: type=1130 audit(1768350619.226:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:19.232956 kernel: audit: type=1131 audit(1768350619.226:305): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:19.239879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:30:19.268220 systemd[1]: Reload requested from client PID 2378 ('systemctl') (unit session-8.scope)... Jan 14 00:30:19.268244 systemd[1]: Reloading... Jan 14 00:30:19.435838 zram_generator::config[2429]: No configuration found. Jan 14 00:30:19.674526 systemd[1]: Reloading finished in 405 ms. Jan 14 00:30:19.694000 audit: BPF prog-id=61 op=LOAD Jan 14 00:30:19.694000 audit: BPF prog-id=57 op=UNLOAD Jan 14 00:30:19.697884 kernel: audit: type=1334 audit(1768350619.694:306): prog-id=61 op=LOAD Jan 14 00:30:19.697940 kernel: audit: type=1334 audit(1768350619.694:307): prog-id=57 op=UNLOAD Jan 14 00:30:19.697962 kernel: audit: type=1334 audit(1768350619.694:308): prog-id=62 op=LOAD Jan 14 00:30:19.694000 audit: BPF prog-id=62 op=LOAD Jan 14 00:30:19.694000 audit: BPF prog-id=63 op=LOAD Jan 14 00:30:19.698835 kernel: audit: type=1334 audit(1768350619.694:309): prog-id=63 op=LOAD Jan 14 00:30:19.694000 audit: BPF prog-id=54 op=UNLOAD Jan 14 00:30:19.704631 kernel: audit: type=1334 audit(1768350619.694:310): prog-id=54 op=UNLOAD Jan 14 00:30:19.694000 audit: BPF prog-id=55 op=UNLOAD Jan 14 00:30:19.699000 audit: BPF prog-id=64 op=LOAD Jan 14 00:30:19.699000 audit: BPF prog-id=51 op=UNLOAD Jan 14 00:30:19.699000 audit: BPF prog-id=65 op=LOAD Jan 14 00:30:19.699000 audit: BPF prog-id=66 op=LOAD Jan 14 00:30:19.699000 audit: BPF prog-id=52 op=UNLOAD Jan 14 00:30:19.699000 audit: BPF prog-id=53 op=UNLOAD Jan 14 00:30:19.700000 audit: BPF prog-id=67 op=LOAD Jan 14 00:30:19.700000 audit: BPF prog-id=56 op=UNLOAD Jan 14 00:30:19.704000 audit: BPF prog-id=68 op=LOAD Jan 14 00:30:19.704000 audit: BPF prog-id=48 op=UNLOAD Jan 14 00:30:19.704000 audit: BPF prog-id=69 op=LOAD Jan 14 00:30:19.704000 audit: BPF prog-id=70 op=LOAD Jan 14 00:30:19.705000 audit: BPF prog-id=49 op=UNLOAD Jan 14 00:30:19.705000 audit: BPF prog-id=50 op=UNLOAD Jan 14 00:30:19.708000 audit: BPF prog-id=71 op=LOAD Jan 14 00:30:19.708000 audit: BPF prog-id=42 op=UNLOAD Jan 14 00:30:19.708000 audit: BPF prog-id=72 op=LOAD Jan 14 00:30:19.708000 audit: BPF prog-id=73 op=LOAD Jan 14 00:30:19.708000 audit: BPF prog-id=43 op=UNLOAD Jan 14 00:30:19.708000 audit: BPF prog-id=44 op=UNLOAD Jan 14 00:30:19.710000 audit: BPF prog-id=74 op=LOAD Jan 14 00:30:19.710000 audit: BPF prog-id=41 op=UNLOAD Jan 14 00:30:19.710000 audit: BPF prog-id=75 op=LOAD Jan 14 00:30:19.710000 audit: BPF 
prog-id=45 op=UNLOAD Jan 14 00:30:19.711000 audit: BPF prog-id=76 op=LOAD Jan 14 00:30:19.711000 audit: BPF prog-id=77 op=LOAD Jan 14 00:30:19.711000 audit: BPF prog-id=46 op=UNLOAD Jan 14 00:30:19.711000 audit: BPF prog-id=47 op=UNLOAD Jan 14 00:30:19.712000 audit: BPF prog-id=78 op=LOAD Jan 14 00:30:19.721000 audit: BPF prog-id=58 op=UNLOAD Jan 14 00:30:19.721000 audit: BPF prog-id=79 op=LOAD Jan 14 00:30:19.721000 audit: BPF prog-id=80 op=LOAD Jan 14 00:30:19.721000 audit: BPF prog-id=59 op=UNLOAD Jan 14 00:30:19.721000 audit: BPF prog-id=60 op=UNLOAD Jan 14 00:30:19.740878 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 00:30:19.740996 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 00:30:19.741419 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:30:19.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 00:30:19.742882 systemd[1]: kubelet.service: Consumed 133ms CPU time, 95.1M memory peak. Jan 14 00:30:19.745347 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:30:19.947230 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:30:19.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:19.964455 (kubelet)[2475]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 00:30:20.063368 kubelet[2475]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 00:30:20.063368 kubelet[2475]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 00:30:20.063368 kubelet[2475]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 00:30:20.064236 kubelet[2475]: I0114 00:30:20.063506 2475 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 00:30:20.229428 sshd[2369]: Connection closed by authenticating user root 5.187.35.21 port 16904 [preauth] Jan 14 00:30:20.230000 audit[2369]: USER_ERR pid=2369 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:20.235755 systemd[1]: sshd@14-91.99.0.249:22-5.187.35.21:16904.service: Deactivated successfully. Jan 14 00:30:20.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-91.99.0.249:22-5.187.35.21:16904 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:30:20.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-91.99.0.249:22-5.187.35.21:16934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:20.264287 systemd[1]: Started sshd@15-91.99.0.249:22-5.187.35.21:16934.service - OpenSSH per-connection server daemon (5.187.35.21:16934). Jan 14 00:30:21.176947 update_engine[1587]: I20260114 00:30:21.175843 1587 update_attempter.cc:509] Updating boot flags... Jan 14 00:30:21.948408 kubelet[2475]: I0114 00:30:21.948068 2475 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 00:30:21.948408 kubelet[2475]: I0114 00:30:21.948116 2475 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 00:30:21.949083 kubelet[2475]: I0114 00:30:21.949060 2475 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 00:30:21.994221 kubelet[2475]: I0114 00:30:21.994174 2475 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 00:30:21.998506 kubelet[2475]: E0114 00:30:21.997967 2475 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://91.99.0.249:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 91.99.0.249:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 00:30:22.014884 kubelet[2475]: I0114 00:30:22.013874 2475 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 00:30:22.018374 kubelet[2475]: I0114 00:30:22.018334 2475 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 00:30:22.018945 kubelet[2475]: I0114 00:30:22.018903 2475 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 00:30:22.019235 kubelet[2475]: I0114 00:30:22.019066 2475 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547-0-0-n-a43761813d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 00:30:22.019488 kubelet[2475]: I0114 00:30:22.019473 2475 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 00:30:22.019543 kubelet[2475]: I0114 00:30:22.019535 2475 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 00:30:22.019910 kubelet[2475]: I0114 00:30:22.019891 2475 state_mem.go:36] "Initialized new in-memory state store" Jan 14 00:30:22.028841 kubelet[2475]: I0114 00:30:22.027904 2475 kubelet.go:480] "Attempting to sync node with API server" Jan 14 00:30:22.028841 kubelet[2475]: I0114 00:30:22.027957 2475 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 00:30:22.028841 kubelet[2475]: I0114 00:30:22.028011 2475 kubelet.go:386] "Adding apiserver pod source" Jan 14 00:30:22.028841 kubelet[2475]: I0114 00:30:22.028029 2475 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 00:30:22.037466 kubelet[2475]: E0114 00:30:22.037424 2475 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://91.99.0.249:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.0.249:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 00:30:22.038145 kubelet[2475]: I0114 00:30:22.038122 2475 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 00:30:22.043853 kubelet[2475]: I0114 00:30:22.043765 2475 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate 
is disabled" Jan 14 00:30:22.044045 kubelet[2475]: W0114 00:30:22.043982 2475 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 14 00:30:22.048530 kubelet[2475]: I0114 00:30:22.048495 2475 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 00:30:22.048661 kubelet[2475]: I0114 00:30:22.048609 2475 server.go:1289] "Started kubelet" Jan 14 00:30:22.052593 kubelet[2475]: I0114 00:30:22.052495 2475 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 00:30:22.056912 kubelet[2475]: I0114 00:30:22.055733 2475 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 00:30:22.056912 kubelet[2475]: E0114 00:30:22.056395 2475 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://91.99.0.249:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547-0-0-n-a43761813d&limit=500&resourceVersion=0\": dial tcp 91.99.0.249:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 00:30:22.057696 kubelet[2475]: I0114 00:30:22.057188 2475 server.go:317] "Adding debug handlers to kubelet server" Jan 14 00:30:22.062521 kubelet[2475]: E0114 00:30:22.061049 2475 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.99.0.249:6443/api/v1/namespaces/default/events\": dial tcp 91.99.0.249:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547-0-0-n-a43761813d.188a7185898aff86 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547-0-0-n-a43761813d,UID:ci-4547-0-0-n-a43761813d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547-0-0-n-a43761813d,},FirstTimestamp:2026-01-14 00:30:22.048518022 +0000 UTC m=+2.074289396,LastTimestamp:2026-01-14 00:30:22.048518022 +0000 UTC m=+2.074289396,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547-0-0-n-a43761813d,}" Jan 14 00:30:22.064354 kubelet[2475]: I0114 00:30:22.063678 2475 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 00:30:22.064354 kubelet[2475]: I0114 00:30:22.064086 2475 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 00:30:22.064354 kubelet[2475]: I0114 00:30:22.064171 2475 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 00:30:22.064615 kubelet[2475]: E0114 00:30:22.064451 2475 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-0-0-n-a43761813d\" not found" Jan 14 00:30:22.067038 kubelet[2475]: I0114 00:30:22.067004 2475 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 00:30:22.068781 kubelet[2475]: E0114 00:30:22.068707 2475 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.0.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-n-a43761813d?timeout=10s\": dial tcp 91.99.0.249:6443: connect: connection refused" interval="200ms" Jan 14 00:30:22.069501 kubelet[2475]: I0114 00:30:22.069476 2475 factory.go:223] Registration of the systemd container factory successfully Jan 14 00:30:22.069730 
kubelet[2475]: I0114 00:30:22.069705 2475 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 00:30:22.075531 kernel: kauditd_printk_skb: 40 callbacks suppressed Jan 14 00:30:22.075691 kernel: audit: type=1325 audit(1768350622.070:351): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2517 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:22.075722 kernel: audit: type=1300 audit(1768350622.070:351): arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe5d0f9e0 a2=0 a3=0 items=0 ppid=2475 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.070000 audit[2517]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2517 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:22.070000 audit[2517]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe5d0f9e0 a2=0 a3=0 items=0 ppid=2475 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.075931 kubelet[2475]: I0114 00:30:22.074071 2475 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 00:30:22.075931 kubelet[2475]: I0114 00:30:22.074147 2475 reconciler.go:26] "Reconciler: start to sync state" Jan 14 00:30:22.081076 kernel: audit: type=1327 audit(1768350622.070:351): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 00:30:22.081228 kernel: audit: type=1325 audit(1768350622.070:352): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:22.081261 kernel: audit: type=1300 audit(1768350622.070:352): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdac09960 a2=0 a3=0 items=0 ppid=2475 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.070000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 00:30:22.070000 audit[2518]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:22.070000 audit[2518]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdac09960 a2=0 a3=0 items=0 ppid=2475 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.082483 kubelet[2475]: E0114 00:30:22.082264 2475 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://91.99.0.249:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.0.249:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 00:30:22.082709 kubelet[2475]: I0114 00:30:22.082565 2475 factory.go:223] Registration 
of the containerd container factory successfully Jan 14 00:30:22.070000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 00:30:22.085944 kernel: audit: type=1327 audit(1768350622.070:352): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 00:30:22.086044 kernel: audit: type=1325 audit(1768350622.075:353): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2520 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:22.075000 audit[2520]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2520 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:22.075000 audit[2520]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd930f400 a2=0 a3=0 items=0 ppid=2475 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.091896 kernel: audit: type=1300 audit(1768350622.075:353): arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd930f400 a2=0 a3=0 items=0 ppid=2475 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.092018 kernel: audit: type=1327 audit(1768350622.075:353): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 00:30:22.092039 kernel: audit: type=1325 audit(1768350622.077:354): table=filter:45 family=2 entries=2 op=nft_register_chain pid=2522 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:22.075000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 00:30:22.077000 audit[2522]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2522 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:22.077000 audit[2522]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffffe94a870 a2=0 a3=0 items=0 ppid=2475 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.077000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 00:30:22.092000 audit[2525]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2525 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:22.092000 audit[2525]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffff57ff3a0 a2=0 a3=0 items=0 ppid=2475 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.092000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 14 00:30:22.094630 kubelet[2475]: 
I0114 00:30:22.094356 2475 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 14 00:30:22.096000 audit[2526]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2526 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:22.096000 audit[2526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd616dba0 a2=0 a3=0 items=0 ppid=2475 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.096000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 00:30:22.097730 kubelet[2475]: I0114 00:30:22.097702 2475 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 14 00:30:22.097871 kubelet[2475]: I0114 00:30:22.097858 2475 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 14 00:30:22.097972 kubelet[2475]: I0114 00:30:22.097959 2475 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 00:30:22.098391 kubelet[2475]: I0114 00:30:22.098019 2475 kubelet.go:2436] "Starting kubelet main sync loop" Jan 14 00:30:22.098391 kubelet[2475]: E0114 00:30:22.098076 2475 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 00:30:22.098000 audit[2527]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2527 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:22.098000 audit[2527]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe8ec3600 a2=0 a3=0 items=0 ppid=2475 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.098000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 00:30:22.099000 audit[2528]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:22.099000 audit[2528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff4deb060 a2=0 a3=0 items=0 ppid=2475 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.099000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 00:30:22.102000 audit[2529]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:22.102000 audit[2529]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd2da7150 a2=0 a3=0 items=0 ppid=2475 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.102000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 00:30:22.104000 audit[2530]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2530 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:22.104000 audit[2530]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff5619910 a2=0 a3=0 items=0 ppid=2475 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.104000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 00:30:22.106000 audit[2531]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2531 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:22.106000 audit[2531]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd6c5a6a0 a2=0 a3=0 items=0 ppid=2475 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.106000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 00:30:22.108000 audit[2532]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2532 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:22.108000 audit[2532]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc9131010 a2=0 a3=0 items=0 ppid=2475 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.108000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 00:30:22.110314 kubelet[2475]: E0114 00:30:22.110278 2475 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 00:30:22.110649 kubelet[2475]: E0114 00:30:22.110606 2475 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://91.99.0.249:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.0.249:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 00:30:22.123628 kubelet[2475]: I0114 00:30:22.123585 2475 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 00:30:22.123973 kubelet[2475]: I0114 00:30:22.123955 2475 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 00:30:22.124529 kubelet[2475]: I0114 00:30:22.124111 2475 state_mem.go:36] "Initialized new in-memory state store" Jan 14 00:30:22.129678 kubelet[2475]: I0114 00:30:22.129634 2475 policy_none.go:49] "None policy: Start" Jan 14 00:30:22.129904 kubelet[2475]: I0114 00:30:22.129886 2475 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 00:30:22.130045 kubelet[2475]: I0114 00:30:22.130027 2475 state_mem.go:35] "Initializing new in-memory state store" Jan 14 00:30:22.144054 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 00:30:22.164745 kubelet[2475]: E0114 00:30:22.164661 2475 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-0-0-n-a43761813d\" not found" Jan 14 00:30:22.165144 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 00:30:22.170949 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 14 00:30:22.182833 kubelet[2475]: E0114 00:30:22.182096 2475 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 00:30:22.182833 kubelet[2475]: I0114 00:30:22.182394 2475 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 00:30:22.182833 kubelet[2475]: I0114 00:30:22.182411 2475 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 00:30:22.182833 kubelet[2475]: I0114 00:30:22.182763 2475 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 00:30:22.187686 kubelet[2475]: E0114 00:30:22.187604 2475 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 00:30:22.187686 kubelet[2475]: E0114 00:30:22.187667 2475 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547-0-0-n-a43761813d\" not found" Jan 14 00:30:22.217050 systemd[1]: Created slice kubepods-burstable-podf6aaadd855ee6529b0c772ffb0c49199.slice - libcontainer container kubepods-burstable-podf6aaadd855ee6529b0c772ffb0c49199.slice. Jan 14 00:30:22.226839 kubelet[2475]: E0114 00:30:22.226662 2475 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-a43761813d\" not found" node="ci-4547-0-0-n-a43761813d" Jan 14 00:30:22.232261 systemd[1]: Created slice kubepods-burstable-pod7f8060305b76c87a1ad5ff6687a656f9.slice - libcontainer container kubepods-burstable-pod7f8060305b76c87a1ad5ff6687a656f9.slice. 
Jan 14 00:30:22.248961 kubelet[2475]: E0114 00:30:22.248560 2475 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-a43761813d\" not found" node="ci-4547-0-0-n-a43761813d" Jan 14 00:30:22.252338 systemd[1]: Created slice kubepods-burstable-pod32e3d1a4cdc9f2ec88fc303b5185b27a.slice - libcontainer container kubepods-burstable-pod32e3d1a4cdc9f2ec88fc303b5185b27a.slice. Jan 14 00:30:22.255162 kubelet[2475]: E0114 00:30:22.255103 2475 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-a43761813d\" not found" node="ci-4547-0-0-n-a43761813d" Jan 14 00:30:22.270047 kubelet[2475]: E0114 00:30:22.269966 2475 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.0.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-n-a43761813d?timeout=10s\": dial tcp 91.99.0.249:6443: connect: connection refused" interval="400ms" Jan 14 00:30:22.275691 kubelet[2475]: I0114 00:30:22.275256 2475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f6aaadd855ee6529b0c772ffb0c49199-ca-certs\") pod \"kube-apiserver-ci-4547-0-0-n-a43761813d\" (UID: \"f6aaadd855ee6529b0c772ffb0c49199\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-a43761813d" Jan 14 00:30:22.275691 kubelet[2475]: I0114 00:30:22.275321 2475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f6aaadd855ee6529b0c772ffb0c49199-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547-0-0-n-a43761813d\" (UID: \"f6aaadd855ee6529b0c772ffb0c49199\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-a43761813d" Jan 14 00:30:22.275691 kubelet[2475]: I0114 00:30:22.275354 2475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f8060305b76c87a1ad5ff6687a656f9-ca-certs\") pod \"kube-controller-manager-ci-4547-0-0-n-a43761813d\" (UID: \"7f8060305b76c87a1ad5ff6687a656f9\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a43761813d" Jan 14 00:30:22.275691 kubelet[2475]: I0114 00:30:22.275383 2475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7f8060305b76c87a1ad5ff6687a656f9-flexvolume-dir\") pod \"kube-controller-manager-ci-4547-0-0-n-a43761813d\" (UID: \"7f8060305b76c87a1ad5ff6687a656f9\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a43761813d" Jan 14 00:30:22.275691 kubelet[2475]: I0114 00:30:22.275437 2475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f8060305b76c87a1ad5ff6687a656f9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547-0-0-n-a43761813d\" (UID: \"7f8060305b76c87a1ad5ff6687a656f9\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a43761813d" Jan 14 00:30:22.276091 kubelet[2475]: I0114 00:30:22.275466 2475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/32e3d1a4cdc9f2ec88fc303b5185b27a-kubeconfig\") pod \"kube-scheduler-ci-4547-0-0-n-a43761813d\" (UID: 
\"32e3d1a4cdc9f2ec88fc303b5185b27a\") " pod="kube-system/kube-scheduler-ci-4547-0-0-n-a43761813d" Jan 14 00:30:22.276091 kubelet[2475]: I0114 00:30:22.275493 2475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f6aaadd855ee6529b0c772ffb0c49199-k8s-certs\") pod \"kube-apiserver-ci-4547-0-0-n-a43761813d\" (UID: \"f6aaadd855ee6529b0c772ffb0c49199\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-a43761813d" Jan 14 00:30:22.276091 kubelet[2475]: I0114 00:30:22.275516 2475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f8060305b76c87a1ad5ff6687a656f9-k8s-certs\") pod \"kube-controller-manager-ci-4547-0-0-n-a43761813d\" (UID: \"7f8060305b76c87a1ad5ff6687a656f9\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a43761813d" Jan 14 00:30:22.276091 kubelet[2475]: I0114 00:30:22.275543 2475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f8060305b76c87a1ad5ff6687a656f9-kubeconfig\") pod \"kube-controller-manager-ci-4547-0-0-n-a43761813d\" (UID: \"7f8060305b76c87a1ad5ff6687a656f9\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a43761813d" Jan 14 00:30:22.287661 kubelet[2475]: I0114 00:30:22.287185 2475 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-n-a43761813d" Jan 14 00:30:22.289075 kubelet[2475]: E0114 00:30:22.288709 2475 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.0.249:6443/api/v1/nodes\": dial tcp 91.99.0.249:6443: connect: connection refused" node="ci-4547-0-0-n-a43761813d" Jan 14 00:30:22.494318 kubelet[2475]: I0114 00:30:22.494098 2475 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-n-a43761813d" Jan 14 00:30:22.495212 kubelet[2475]: E0114 00:30:22.495145 2475 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.0.249:6443/api/v1/nodes\": dial tcp 91.99.0.249:6443: connect: connection refused" node="ci-4547-0-0-n-a43761813d" Jan 14 00:30:22.529924 containerd[1612]: time="2026-01-14T00:30:22.529735535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547-0-0-n-a43761813d,Uid:f6aaadd855ee6529b0c772ffb0c49199,Namespace:kube-system,Attempt:0,}" Jan 14 00:30:22.550733 containerd[1612]: time="2026-01-14T00:30:22.550178706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547-0-0-n-a43761813d,Uid:7f8060305b76c87a1ad5ff6687a656f9,Namespace:kube-system,Attempt:0,}" Jan 14 00:30:22.557331 containerd[1612]: time="2026-01-14T00:30:22.557237323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547-0-0-n-a43761813d,Uid:32e3d1a4cdc9f2ec88fc303b5185b27a,Namespace:kube-system,Attempt:0,}" Jan 14 00:30:22.580463 containerd[1612]: time="2026-01-14T00:30:22.579683822Z" level=info msg="connecting to shim 2ca2f0c7b10fd2a0e61d9a94d159a99ad21654dc1e0b36ada49c2f1f3c371fec" address="unix:///run/containerd/s/9c504c6edac1528ba473e823368df47d10899a89595ec15f1c3f0738d1227e17" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:30:22.600936 containerd[1612]: time="2026-01-14T00:30:22.597536370Z" level=info msg="connecting to shim d10aba203b91f3066d1ae28a0cb630b12ad3e182c90f4a3bb4ce2ee54ddb7708" 
address="unix:///run/containerd/s/fa4c3e249692259f9c4cc8a02af2d6e0e8eb4e3cab9ebfb95c96c41c1cc73ba2" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:30:22.639239 systemd[1]: Started cri-containerd-2ca2f0c7b10fd2a0e61d9a94d159a99ad21654dc1e0b36ada49c2f1f3c371fec.scope - libcontainer container 2ca2f0c7b10fd2a0e61d9a94d159a99ad21654dc1e0b36ada49c2f1f3c371fec. Jan 14 00:30:22.646524 containerd[1612]: time="2026-01-14T00:30:22.646143735Z" level=info msg="connecting to shim 5ebb241df3261143e8f8355e8a7946001cc8b878bcac61e914af404984d09873" address="unix:///run/containerd/s/d01a5a656ffd85306da3b630deea66c89c28219eab10d276facad698527fb8f7" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:30:22.662209 systemd[1]: Started cri-containerd-d10aba203b91f3066d1ae28a0cb630b12ad3e182c90f4a3bb4ce2ee54ddb7708.scope - libcontainer container d10aba203b91f3066d1ae28a0cb630b12ad3e182c90f4a3bb4ce2ee54ddb7708. Jan 14 00:30:22.670948 kubelet[2475]: E0114 00:30:22.670904 2475 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.0.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-n-a43761813d?timeout=10s\": dial tcp 91.99.0.249:6443: connect: connection refused" interval="800ms" Jan 14 00:30:22.673000 audit: BPF prog-id=81 op=LOAD Jan 14 00:30:22.673000 audit: BPF prog-id=82 op=LOAD Jan 14 00:30:22.673000 audit[2569]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=2544 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263613266306337623130666432613065363164396139346431353961 Jan 14 00:30:22.673000 audit: BPF prog-id=82 op=UNLOAD Jan 14 00:30:22.673000 audit[2569]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2544 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263613266306337623130666432613065363164396139346431353961 Jan 14 00:30:22.674000 audit: BPF prog-id=83 op=LOAD Jan 14 00:30:22.674000 audit[2569]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=2544 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263613266306337623130666432613065363164396139346431353961 Jan 14 00:30:22.674000 audit: BPF prog-id=84 op=LOAD Jan 14 00:30:22.674000 audit[2569]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=2544 pid=2569 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263613266306337623130666432613065363164396139346431353961 Jan 14 00:30:22.674000 audit: BPF prog-id=84 op=UNLOAD Jan 14 00:30:22.674000 audit[2569]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2544 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263613266306337623130666432613065363164396139346431353961 Jan 14 00:30:22.674000 audit: BPF prog-id=83 op=UNLOAD Jan 14 00:30:22.674000 audit[2569]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2544 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263613266306337623130666432613065363164396139346431353961 Jan 14 00:30:22.674000 audit: BPF prog-id=85 op=LOAD Jan 14 00:30:22.674000 audit[2569]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=2544 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263613266306337623130666432613065363164396139346431353961 Jan 14 00:30:22.684000 audit: BPF prog-id=86 op=LOAD Jan 14 00:30:22.685000 audit: BPF prog-id=87 op=LOAD Jan 14 00:30:22.685000 audit[2582]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=2557 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431306162613230336239316633303636643161653238613063623633 Jan 14 00:30:22.685000 audit: BPF prog-id=87 op=UNLOAD Jan 14 00:30:22.685000 audit[2582]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2557 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431306162613230336239316633303636643161653238613063623633 Jan 14 00:30:22.685000 audit: BPF prog-id=88 op=LOAD Jan 14 00:30:22.685000 audit[2582]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=2557 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431306162613230336239316633303636643161653238613063623633 Jan 14 00:30:22.685000 audit: BPF prog-id=89 op=LOAD Jan 14 00:30:22.685000 audit[2582]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=2557 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431306162613230336239316633303636643161653238613063623633 Jan 14 00:30:22.686000 audit: BPF prog-id=89 op=UNLOAD Jan 14 00:30:22.686000 audit[2582]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2557 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.686000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431306162613230336239316633303636643161653238613063623633 Jan 14 00:30:22.687000 audit: BPF prog-id=88 op=UNLOAD Jan 14 00:30:22.687000 audit[2582]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2557 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431306162613230336239316633303636643161653238613063623633 Jan 14 00:30:22.687000 audit: BPF prog-id=90 op=LOAD Jan 14 00:30:22.687000 audit[2582]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=2557 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.687000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431306162613230336239316633303636643161653238613063623633 Jan 14 00:30:22.702066 systemd[1]: Started cri-containerd-5ebb241df3261143e8f8355e8a7946001cc8b878bcac61e914af404984d09873.scope - libcontainer container 5ebb241df3261143e8f8355e8a7946001cc8b878bcac61e914af404984d09873. Jan 14 00:30:22.731000 audit: BPF prog-id=91 op=LOAD Jan 14 00:30:22.732000 audit: BPF prog-id=92 op=LOAD Jan 14 00:30:22.732000 audit[2627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=2603 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565626232343164663332363131343365386638333535653861373934 Jan 14 00:30:22.735000 audit: BPF prog-id=92 op=UNLOAD Jan 14 00:30:22.735000 audit[2627]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2603 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565626232343164663332363131343365386638333535653861373934 Jan 14 00:30:22.735000 audit: BPF prog-id=93 op=LOAD Jan 14 00:30:22.735000 audit[2627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=2603 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565626232343164663332363131343365386638333535653861373934 Jan 14 00:30:22.735000 audit: BPF prog-id=94 op=LOAD Jan 14 00:30:22.735000 audit[2627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=2603 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565626232343164663332363131343365386638333535653861373934 Jan 14 00:30:22.735000 audit: BPF prog-id=94 op=UNLOAD Jan 14 00:30:22.735000 audit[2627]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2603 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565626232343164663332363131343365386638333535653861373934 Jan 14 00:30:22.735000 audit: BPF prog-id=93 op=UNLOAD Jan 14 00:30:22.735000 audit[2627]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2603 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565626232343164663332363131343365386638333535653861373934 Jan 14 00:30:22.735000 audit: BPF prog-id=95 op=LOAD Jan 14 00:30:22.735000 audit[2627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=2603 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565626232343164663332363131343365386638333535653861373934 Jan 14 00:30:22.754462 containerd[1612]: time="2026-01-14T00:30:22.754322685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547-0-0-n-a43761813d,Uid:f6aaadd855ee6529b0c772ffb0c49199,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ca2f0c7b10fd2a0e61d9a94d159a99ad21654dc1e0b36ada49c2f1f3c371fec\"" Jan 14 00:30:22.771219 containerd[1612]: time="2026-01-14T00:30:22.771172450Z" level=info msg="CreateContainer within sandbox \"2ca2f0c7b10fd2a0e61d9a94d159a99ad21654dc1e0b36ada49c2f1f3c371fec\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 00:30:22.772753 containerd[1612]: time="2026-01-14T00:30:22.772699729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547-0-0-n-a43761813d,Uid:7f8060305b76c87a1ad5ff6687a656f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"d10aba203b91f3066d1ae28a0cb630b12ad3e182c90f4a3bb4ce2ee54ddb7708\"" Jan 14 00:30:22.787435 containerd[1612]: time="2026-01-14T00:30:22.787388429Z" level=info msg="Container 4584440fd5fccacb2ebe0e3057c14853042c20eeb4feac332148aa12fd636952: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:30:22.793418 containerd[1612]: time="2026-01-14T00:30:22.793353403Z" level=info msg="CreateContainer within sandbox \"d10aba203b91f3066d1ae28a0cb630b12ad3e182c90f4a3bb4ce2ee54ddb7708\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 00:30:22.796429 containerd[1612]: time="2026-01-14T00:30:22.796347721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547-0-0-n-a43761813d,Uid:32e3d1a4cdc9f2ec88fc303b5185b27a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ebb241df3261143e8f8355e8a7946001cc8b878bcac61e914af404984d09873\"" Jan 14 00:30:22.804099 containerd[1612]: 
time="2026-01-14T00:30:22.803951142Z" level=info msg="CreateContainer within sandbox \"5ebb241df3261143e8f8355e8a7946001cc8b878bcac61e914af404984d09873\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 00:30:22.807120 containerd[1612]: time="2026-01-14T00:30:22.807076535Z" level=info msg="CreateContainer within sandbox \"2ca2f0c7b10fd2a0e61d9a94d159a99ad21654dc1e0b36ada49c2f1f3c371fec\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4584440fd5fccacb2ebe0e3057c14853042c20eeb4feac332148aa12fd636952\"" Jan 14 00:30:22.808879 containerd[1612]: time="2026-01-14T00:30:22.807951122Z" level=info msg="StartContainer for \"4584440fd5fccacb2ebe0e3057c14853042c20eeb4feac332148aa12fd636952\"" Jan 14 00:30:22.809996 containerd[1612]: time="2026-01-14T00:30:22.809348567Z" level=info msg="connecting to shim 4584440fd5fccacb2ebe0e3057c14853042c20eeb4feac332148aa12fd636952" address="unix:///run/containerd/s/9c504c6edac1528ba473e823368df47d10899a89595ec15f1c3f0738d1227e17" protocol=ttrpc version=3 Jan 14 00:30:22.821300 containerd[1612]: time="2026-01-14T00:30:22.821249852Z" level=info msg="Container 0ec656bf17eff951feffd967e5927545d4cf9ec24b5f13d6c2e4b347262e93b4: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:30:22.824204 containerd[1612]: time="2026-01-14T00:30:22.824156628Z" level=info msg="Container f928e1a237cedced6819b194837ef03a1098f023eba5dc08691e3e7f5ec451a0: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:30:22.834666 containerd[1612]: time="2026-01-14T00:30:22.834617457Z" level=info msg="CreateContainer within sandbox \"d10aba203b91f3066d1ae28a0cb630b12ad3e182c90f4a3bb4ce2ee54ddb7708\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0ec656bf17eff951feffd967e5927545d4cf9ec24b5f13d6c2e4b347262e93b4\"" Jan 14 00:30:22.835170 systemd[1]: Started cri-containerd-4584440fd5fccacb2ebe0e3057c14853042c20eeb4feac332148aa12fd636952.scope - libcontainer container 4584440fd5fccacb2ebe0e3057c14853042c20eeb4feac332148aa12fd636952. 
Jan 14 00:30:22.838237 containerd[1612]: time="2026-01-14T00:30:22.838192396Z" level=info msg="StartContainer for \"0ec656bf17eff951feffd967e5927545d4cf9ec24b5f13d6c2e4b347262e93b4\"" Jan 14 00:30:22.841452 containerd[1612]: time="2026-01-14T00:30:22.841408530Z" level=info msg="connecting to shim 0ec656bf17eff951feffd967e5927545d4cf9ec24b5f13d6c2e4b347262e93b4" address="unix:///run/containerd/s/fa4c3e249692259f9c4cc8a02af2d6e0e8eb4e3cab9ebfb95c96c41c1cc73ba2" protocol=ttrpc version=3 Jan 14 00:30:22.845672 containerd[1612]: time="2026-01-14T00:30:22.845616733Z" level=info msg="CreateContainer within sandbox \"5ebb241df3261143e8f8355e8a7946001cc8b878bcac61e914af404984d09873\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f928e1a237cedced6819b194837ef03a1098f023eba5dc08691e3e7f5ec451a0\"" Jan 14 00:30:22.846520 containerd[1612]: time="2026-01-14T00:30:22.846488922Z" level=info msg="StartContainer for \"f928e1a237cedced6819b194837ef03a1098f023eba5dc08691e3e7f5ec451a0\"" Jan 14 00:30:22.848443 containerd[1612]: time="2026-01-14T00:30:22.848408025Z" level=info msg="connecting to shim f928e1a237cedced6819b194837ef03a1098f023eba5dc08691e3e7f5ec451a0" address="unix:///run/containerd/s/d01a5a656ffd85306da3b630deea66c89c28219eab10d276facad698527fb8f7" protocol=ttrpc version=3 Jan 14 00:30:22.860000 audit: BPF prog-id=96 op=LOAD Jan 14 00:30:22.861000 audit: BPF prog-id=97 op=LOAD Jan 14 00:30:22.861000 audit[2674]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2544 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435383434343066643566636361636232656265306533303537633134 Jan 14 00:30:22.861000 audit: BPF prog-id=97 op=UNLOAD Jan 14 00:30:22.861000 audit[2674]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2544 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435383434343066643566636361636232656265306533303537633134 Jan 14 00:30:22.862000 audit: BPF prog-id=98 op=LOAD Jan 14 00:30:22.862000 audit[2674]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2544 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435383434343066643566636361636232656265306533303537633134 Jan 14 00:30:22.862000 audit: BPF prog-id=99 op=LOAD Jan 14 00:30:22.862000 audit[2674]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 
a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2544 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435383434343066643566636361636232656265306533303537633134 Jan 14 00:30:22.862000 audit: BPF prog-id=99 op=UNLOAD Jan 14 00:30:22.862000 audit[2674]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2544 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435383434343066643566636361636232656265306533303537633134 Jan 14 00:30:22.862000 audit: BPF prog-id=98 op=UNLOAD Jan 14 00:30:22.862000 audit[2674]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2544 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435383434343066643566636361636232656265306533303537633134 Jan 14 00:30:22.862000 audit: BPF prog-id=100 op=LOAD Jan 14 00:30:22.862000 audit[2674]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2544 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435383434343066643566636361636232656265306533303537633134 Jan 14 00:30:22.873156 systemd[1]: Started cri-containerd-0ec656bf17eff951feffd967e5927545d4cf9ec24b5f13d6c2e4b347262e93b4.scope - libcontainer container 0ec656bf17eff951feffd967e5927545d4cf9ec24b5f13d6c2e4b347262e93b4. Jan 14 00:30:22.898190 systemd[1]: Started cri-containerd-f928e1a237cedced6819b194837ef03a1098f023eba5dc08691e3e7f5ec451a0.scope - libcontainer container f928e1a237cedced6819b194837ef03a1098f023eba5dc08691e3e7f5ec451a0. 
Jan 14 00:30:22.900262 kubelet[2475]: I0114 00:30:22.900216 2475 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-n-a43761813d" Jan 14 00:30:22.900631 kubelet[2475]: E0114 00:30:22.900599 2475 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.0.249:6443/api/v1/nodes\": dial tcp 91.99.0.249:6443: connect: connection refused" node="ci-4547-0-0-n-a43761813d" Jan 14 00:30:22.910000 audit: BPF prog-id=101 op=LOAD Jan 14 00:30:22.911000 audit: BPF prog-id=102 op=LOAD Jan 14 00:30:22.911000 audit[2696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=2557 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065633635366266313765666639353166656666643936376535393237 Jan 14 00:30:22.911000 audit: BPF prog-id=102 op=UNLOAD Jan 14 00:30:22.911000 audit[2696]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2557 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065633635366266313765666639353166656666643936376535393237 Jan 14 00:30:22.912000 audit: BPF prog-id=103 op=LOAD Jan 14 00:30:22.912000 audit[2696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=2557 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065633635366266313765666639353166656666643936376535393237 Jan 14 00:30:22.912000 audit: BPF prog-id=104 op=LOAD Jan 14 00:30:22.912000 audit[2696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=2557 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065633635366266313765666639353166656666643936376535393237 Jan 14 00:30:22.913000 audit: BPF prog-id=104 op=UNLOAD Jan 14 00:30:22.913000 audit[2696]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2557 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065633635366266313765666639353166656666643936376535393237 Jan 14 00:30:22.913000 audit: BPF prog-id=103 op=UNLOAD Jan 14 00:30:22.913000 audit[2696]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2557 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065633635366266313765666639353166656666643936376535393237 Jan 14 00:30:22.913000 audit: BPF prog-id=105 op=LOAD Jan 14 00:30:22.913000 audit[2696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=2557 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065633635366266313765666639353166656666643936376535393237 Jan 14 00:30:22.932087 containerd[1612]: time="2026-01-14T00:30:22.932036214Z" level=info msg="StartContainer for \"4584440fd5fccacb2ebe0e3057c14853042c20eeb4feac332148aa12fd636952\" returns successfully" Jan 14 00:30:22.946000 audit: BPF prog-id=106 op=LOAD Jan 14 00:30:22.948000 audit: BPF prog-id=107 op=LOAD Jan 14 00:30:22.948000 audit[2697]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2603 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639323865316132333763656463656436383139623139343833376566 Jan 14 00:30:22.948000 audit: BPF prog-id=107 op=UNLOAD Jan 14 00:30:22.948000 audit[2697]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2603 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639323865316132333763656463656436383139623139343833376566 Jan 14 00:30:22.948000 audit: BPF prog-id=108 op=LOAD Jan 14 00:30:22.948000 audit[2697]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2603 pid=2697 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639323865316132333763656463656436383139623139343833376566 Jan 14 00:30:22.948000 audit: BPF prog-id=109 op=LOAD Jan 14 00:30:22.948000 audit[2697]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2603 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639323865316132333763656463656436383139623139343833376566 Jan 14 00:30:22.948000 audit: BPF prog-id=109 op=UNLOAD Jan 14 00:30:22.948000 audit[2697]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2603 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639323865316132333763656463656436383139623139343833376566 Jan 14 00:30:22.948000 audit: BPF prog-id=108 op=UNLOAD Jan 14 00:30:22.948000 audit[2697]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2603 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639323865316132333763656463656436383139623139343833376566 Jan 14 00:30:22.948000 audit: BPF prog-id=110 op=LOAD Jan 14 00:30:22.948000 audit[2697]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2603 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:22.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639323865316132333763656463656436383139623139343833376566 Jan 14 00:30:22.967892 containerd[1612]: time="2026-01-14T00:30:22.967804748Z" level=info msg="StartContainer for \"0ec656bf17eff951feffd967e5927545d4cf9ec24b5f13d6c2e4b347262e93b4\" returns successfully" Jan 14 00:30:23.048958 kubelet[2475]: E0114 00:30:23.048904 2475 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://91.99.0.249:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.0.249:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 00:30:23.128181 containerd[1612]: time="2026-01-14T00:30:23.128120326Z" level=info msg="StartContainer for \"f928e1a237cedced6819b194837ef03a1098f023eba5dc08691e3e7f5ec451a0\" returns successfully" Jan 14 00:30:23.150637 kubelet[2475]: E0114 00:30:23.150592 2475 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-a43761813d\" not found" node="ci-4547-0-0-n-a43761813d" Jan 14 00:30:23.159419 kubelet[2475]: E0114 00:30:23.159370 2475 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-a43761813d\" not found" node="ci-4547-0-0-n-a43761813d" Jan 14 00:30:23.702472 kubelet[2475]: I0114 00:30:23.702430 2475 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-n-a43761813d" Jan 14 00:30:23.787881 sshd[2484]: Connection closed by authenticating user root 5.187.35.21 port 16934 [preauth] Jan 14 00:30:23.786000 audit[2484]: USER_ERR pid=2484 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:23.793743 systemd[1]: sshd@15-91.99.0.249:22-5.187.35.21:16934.service: Deactivated successfully. Jan 14 00:30:23.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-91.99.0.249:22-5.187.35.21:16934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:23.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-91.99.0.249:22-5.187.35.21:42984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:23.813163 systemd[1]: Started sshd@16-91.99.0.249:22-5.187.35.21:42984.service - OpenSSH per-connection server daemon (5.187.35.21:42984). 
Jan 14 00:30:24.162182 kubelet[2475]: E0114 00:30:24.162130 2475 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-a43761813d\" not found" node="ci-4547-0-0-n-a43761813d" Jan 14 00:30:24.163499 kubelet[2475]: E0114 00:30:24.163078 2475 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-a43761813d\" not found" node="ci-4547-0-0-n-a43761813d" Jan 14 00:30:25.164319 kubelet[2475]: E0114 00:30:25.164031 2475 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-a43761813d\" not found" node="ci-4547-0-0-n-a43761813d" Jan 14 00:30:26.103322 kubelet[2475]: E0114 00:30:26.103273 2475 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4547-0-0-n-a43761813d\" not found" node="ci-4547-0-0-n-a43761813d" Jan 14 00:30:26.172242 kubelet[2475]: I0114 00:30:26.171783 2475 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547-0-0-n-a43761813d" Jan 14 00:30:26.172242 kubelet[2475]: E0114 00:30:26.171845 2475 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4547-0-0-n-a43761813d\": node \"ci-4547-0-0-n-a43761813d\" not found" Jan 14 00:30:26.267961 kubelet[2475]: I0114 00:30:26.265869 2475 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-n-a43761813d" Jan 14 00:30:26.288286 kubelet[2475]: E0114 00:30:26.288248 2475 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547-0-0-n-a43761813d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547-0-0-n-a43761813d" Jan 14 00:30:26.288478 kubelet[2475]: I0114 00:30:26.288463 2475 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a43761813d" Jan 14 00:30:26.293217 kubelet[2475]: E0114 00:30:26.293182 2475 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547-0-0-n-a43761813d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a43761813d" Jan 14 00:30:26.293391 kubelet[2475]: I0114 00:30:26.293376 2475 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-n-a43761813d" Jan 14 00:30:26.295518 kubelet[2475]: E0114 00:30:26.295476 2475 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547-0-0-n-a43761813d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547-0-0-n-a43761813d" Jan 14 00:30:27.034858 kubelet[2475]: I0114 00:30:27.033500 2475 apiserver.go:52] "Watching apiserver" Jan 14 00:30:27.074455 kubelet[2475]: I0114 00:30:27.074385 2475 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 00:30:27.489358 sshd[2777]: Connection closed by authenticating user root 5.187.35.21 port 42984 [preauth] Jan 14 00:30:27.488000 audit[2777]: USER_ERR pid=2777 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" 
exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:27.493851 kernel: kauditd_printk_skb: 161 callbacks suppressed Jan 14 00:30:27.493991 kernel: audit: type=1109 audit(1768350627.488:414): pid=2777 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:27.495725 systemd[1]: sshd@16-91.99.0.249:22-5.187.35.21:42984.service: Deactivated successfully. Jan 14 00:30:27.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-91.99.0.249:22-5.187.35.21:42984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:27.502845 kernel: audit: type=1131 audit(1768350627.497:415): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-91.99.0.249:22-5.187.35.21:42984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:27.514223 systemd[1]: Started sshd@17-91.99.0.249:22-5.187.35.21:42998.service - OpenSSH per-connection server daemon (5.187.35.21:42998). Jan 14 00:30:27.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-91.99.0.249:22-5.187.35.21:42998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:27.517994 kernel: audit: type=1130 audit(1768350627.512:416): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-91.99.0.249:22-5.187.35.21:42998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:27.660987 kubelet[2475]: I0114 00:30:27.660938 2475 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-n-a43761813d" Jan 14 00:30:28.977141 systemd[1]: Reload requested from client PID 2792 ('systemctl') (unit session-8.scope)... Jan 14 00:30:28.977163 systemd[1]: Reloading... Jan 14 00:30:29.101988 zram_generator::config[2847]: No configuration found. Jan 14 00:30:29.336739 systemd[1]: Reloading finished in 359 ms. Jan 14 00:30:29.374581 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:30:29.388769 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 00:30:29.389479 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:30:29.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:29.389626 systemd[1]: kubelet.service: Consumed 2.489s CPU time, 127.6M memory peak. Jan 14 00:30:29.392889 kernel: audit: type=1131 audit(1768350629.388:417): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:30:29.399179 kernel: audit: type=1334 audit(1768350629.396:418): prog-id=111 op=LOAD Jan 14 00:30:29.399321 kernel: audit: type=1334 audit(1768350629.396:419): prog-id=75 op=UNLOAD Jan 14 00:30:29.399346 kernel: audit: type=1334 audit(1768350629.398:420): prog-id=112 op=LOAD Jan 14 00:30:29.396000 audit: BPF prog-id=111 op=LOAD Jan 14 00:30:29.396000 audit: BPF prog-id=75 op=UNLOAD Jan 14 00:30:29.398000 audit: BPF prog-id=112 op=LOAD Jan 14 00:30:29.396627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:30:29.402406 kernel: audit: type=1334 audit(1768350629.399:421): prog-id=113 op=LOAD Jan 14 00:30:29.402533 kernel: audit: type=1334 audit(1768350629.399:422): prog-id=76 op=UNLOAD Jan 14 00:30:29.402554 kernel: audit: type=1334 audit(1768350629.399:423): prog-id=77 op=UNLOAD Jan 14 00:30:29.399000 audit: BPF prog-id=113 op=LOAD Jan 14 00:30:29.399000 audit: BPF prog-id=76 op=UNLOAD Jan 14 00:30:29.399000 audit: BPF prog-id=77 op=UNLOAD Jan 14 00:30:29.399000 audit: BPF prog-id=114 op=LOAD Jan 14 00:30:29.399000 audit: BPF prog-id=64 op=UNLOAD Jan 14 00:30:29.399000 audit: BPF prog-id=115 op=LOAD Jan 14 00:30:29.399000 audit: BPF prog-id=116 op=LOAD Jan 14 00:30:29.399000 audit: BPF prog-id=65 op=UNLOAD Jan 14 00:30:29.399000 audit: BPF prog-id=66 op=UNLOAD Jan 14 00:30:29.399000 audit: BPF prog-id=117 op=LOAD Jan 14 00:30:29.399000 audit: BPF prog-id=67 op=UNLOAD Jan 14 00:30:29.401000 audit: BPF prog-id=118 op=LOAD Jan 14 00:30:29.401000 audit: BPF prog-id=74 op=UNLOAD Jan 14 00:30:29.403000 audit: BPF prog-id=119 op=LOAD Jan 14 00:30:29.403000 audit: BPF prog-id=78 op=UNLOAD Jan 14 00:30:29.403000 audit: BPF prog-id=120 op=LOAD Jan 14 00:30:29.403000 audit: BPF prog-id=121 op=LOAD Jan 14 00:30:29.403000 audit: BPF prog-id=79 op=UNLOAD Jan 14 00:30:29.403000 audit: BPF prog-id=80 op=UNLOAD Jan 14 00:30:29.404000 audit: BPF prog-id=122 op=LOAD Jan 14 00:30:29.404000 audit: BPF prog-id=71 op=UNLOAD Jan 14 00:30:29.404000 audit: BPF prog-id=123 op=LOAD Jan 14 00:30:29.404000 audit: BPF prog-id=124 op=LOAD Jan 14 00:30:29.404000 audit: BPF prog-id=72 op=UNLOAD Jan 14 00:30:29.404000 audit: BPF prog-id=73 op=UNLOAD Jan 14 00:30:29.405000 audit: BPF prog-id=125 op=LOAD Jan 14 00:30:29.405000 audit: BPF prog-id=68 op=UNLOAD Jan 14 00:30:29.405000 audit: BPF prog-id=126 op=LOAD Jan 14 00:30:29.405000 audit: BPF prog-id=127 op=LOAD Jan 14 00:30:29.405000 audit: BPF prog-id=69 op=UNLOAD Jan 14 00:30:29.405000 audit: BPF prog-id=70 op=UNLOAD Jan 14 00:30:29.405000 audit: BPF prog-id=128 op=LOAD Jan 14 00:30:29.406000 audit: BPF prog-id=129 op=LOAD Jan 14 00:30:29.406000 audit: BPF prog-id=62 op=UNLOAD Jan 14 00:30:29.406000 audit: BPF prog-id=63 op=UNLOAD Jan 14 00:30:29.406000 audit: BPF prog-id=130 op=LOAD Jan 14 00:30:29.406000 audit: BPF prog-id=61 op=UNLOAD Jan 14 00:30:29.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:29.582101 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:30:29.596306 (kubelet)[2886]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 00:30:29.672844 kubelet[2886]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 00:30:29.672844 kubelet[2886]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 00:30:29.672844 kubelet[2886]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 00:30:29.672844 kubelet[2886]: I0114 00:30:29.672382 2886 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 00:30:29.686374 kubelet[2886]: I0114 00:30:29.686331 2886 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 00:30:29.686659 kubelet[2886]: I0114 00:30:29.686643 2886 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 00:30:29.687930 kubelet[2886]: I0114 00:30:29.687894 2886 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 00:30:29.693246 kubelet[2886]: I0114 00:30:29.693118 2886 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 14 00:30:29.699928 kubelet[2886]: I0114 00:30:29.699729 2886 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 00:30:29.710610 kubelet[2886]: I0114 00:30:29.710572 2886 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 00:30:29.718526 kubelet[2886]: I0114 00:30:29.718225 2886 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 00:30:29.720010 kubelet[2886]: I0114 00:30:29.719453 2886 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 00:30:29.720756 kubelet[2886]: I0114 00:30:29.720270 2886 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547-0-0-n-a43761813d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 00:30:29.721267 kubelet[2886]: I0114 00:30:29.721159 2886 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 00:30:29.721418 kubelet[2886]: I0114 00:30:29.721363 2886 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 00:30:29.721748 kubelet[2886]: I0114 00:30:29.721646 2886 state_mem.go:36] "Initialized new in-memory state store" Jan 14 00:30:29.722315 kubelet[2886]: I0114 00:30:29.722049 2886 kubelet.go:480] "Attempting to sync node with API server" Jan 14 00:30:29.722574 kubelet[2886]: I0114 00:30:29.722556 2886 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 00:30:29.722981 kubelet[2886]: I0114 00:30:29.722874 2886 kubelet.go:386] "Adding apiserver pod source" Jan 14 00:30:29.723265 kubelet[2886]: I0114 00:30:29.723119 2886 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 00:30:29.726764 kubelet[2886]: I0114 00:30:29.726717 2886 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 00:30:29.728663 kubelet[2886]: I0114 00:30:29.728226 2886 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 00:30:29.733252 kubelet[2886]: I0114 00:30:29.733218 2886 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 00:30:29.733252 kubelet[2886]: I0114 00:30:29.733270 2886 server.go:1289] "Started kubelet" Jan 14 00:30:29.737428 kubelet[2886]: I0114 00:30:29.736552 2886 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 00:30:29.739493 kubelet[2886]: 
I0114 00:30:29.738721 2886 server.go:317] "Adding debug handlers to kubelet server" Jan 14 00:30:29.744180 kubelet[2886]: I0114 00:30:29.737023 2886 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 00:30:29.744462 kubelet[2886]: I0114 00:30:29.744435 2886 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 00:30:29.745445 kubelet[2886]: I0114 00:30:29.744655 2886 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 00:30:29.746250 kubelet[2886]: I0114 00:30:29.746205 2886 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 00:30:29.752443 kubelet[2886]: I0114 00:30:29.752404 2886 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 14 00:30:29.754508 kubelet[2886]: I0114 00:30:29.754479 2886 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 00:30:29.754945 kubelet[2886]: E0114 00:30:29.754917 2886 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-0-0-n-a43761813d\" not found" Jan 14 00:30:29.759440 kubelet[2886]: I0114 00:30:29.759407 2886 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 00:30:29.759786 kubelet[2886]: I0114 00:30:29.759772 2886 reconciler.go:26] "Reconciler: start to sync state" Jan 14 00:30:29.764511 kubelet[2886]: I0114 00:30:29.764435 2886 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 14 00:30:29.764885 kubelet[2886]: I0114 00:30:29.764655 2886 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 14 00:30:29.764885 kubelet[2886]: I0114 00:30:29.764687 2886 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 00:30:29.764885 kubelet[2886]: I0114 00:30:29.764695 2886 kubelet.go:2436] "Starting kubelet main sync loop" Jan 14 00:30:29.764885 kubelet[2886]: E0114 00:30:29.764741 2886 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 00:30:29.775179 kubelet[2886]: I0114 00:30:29.775151 2886 factory.go:223] Registration of the systemd container factory successfully Jan 14 00:30:29.775466 kubelet[2886]: I0114 00:30:29.775441 2886 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 00:30:29.783866 kubelet[2886]: E0114 00:30:29.782517 2886 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 00:30:29.783866 kubelet[2886]: I0114 00:30:29.782667 2886 factory.go:223] Registration of the containerd container factory successfully Jan 14 00:30:29.865256 kubelet[2886]: E0114 00:30:29.865093 2886 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 14 00:30:29.872882 kubelet[2886]: I0114 00:30:29.872849 2886 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 00:30:29.872882 kubelet[2886]: I0114 00:30:29.872874 2886 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 00:30:29.873051 kubelet[2886]: I0114 00:30:29.872903 2886 state_mem.go:36] "Initialized new in-memory state store" Jan 14 00:30:29.873102 kubelet[2886]: I0114 00:30:29.873083 2886 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 00:30:29.873128 kubelet[2886]: I0114 00:30:29.873102 2886 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 00:30:29.873152 kubelet[2886]: I0114 00:30:29.873129 2886 policy_none.go:49] "None policy: Start" Jan 14 00:30:29.873152 kubelet[2886]: I0114 00:30:29.873144 2886 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 00:30:29.873195 kubelet[2886]: I0114 00:30:29.873156 2886 state_mem.go:35] "Initializing new in-memory state store" Jan 14 00:30:29.873302 kubelet[2886]: I0114 00:30:29.873289 2886 state_mem.go:75] "Updated machine memory state" Jan 14 00:30:29.879160 kubelet[2886]: E0114 00:30:29.878792 2886 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 00:30:29.879502 kubelet[2886]: I0114 00:30:29.879471 2886 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 00:30:29.879552 kubelet[2886]: I0114 00:30:29.879515 2886 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 00:30:29.881012 kubelet[2886]: E0114 00:30:29.880797 2886 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 14 00:30:29.882062 kubelet[2886]: I0114 00:30:29.882026 2886 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 00:30:29.989442 kubelet[2886]: I0114 00:30:29.989397 2886 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-n-a43761813d" Jan 14 00:30:30.054084 kubelet[2886]: I0114 00:30:30.054043 2886 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547-0-0-n-a43761813d" Jan 14 00:30:30.054427 kubelet[2886]: I0114 00:30:30.054145 2886 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547-0-0-n-a43761813d" Jan 14 00:30:30.067657 kubelet[2886]: I0114 00:30:30.067077 2886 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-n-a43761813d" Jan 14 00:30:30.067657 kubelet[2886]: I0114 00:30:30.067277 2886 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a43761813d" Jan 14 00:30:30.068928 kubelet[2886]: I0114 00:30:30.068860 2886 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-n-a43761813d" Jan 14 00:30:30.163058 kubelet[2886]: I0114 00:30:30.162533 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f8060305b76c87a1ad5ff6687a656f9-k8s-certs\") pod \"kube-controller-manager-ci-4547-0-0-n-a43761813d\" (UID: \"7f8060305b76c87a1ad5ff6687a656f9\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a43761813d" Jan 14 00:30:30.163058 kubelet[2886]: I0114 00:30:30.162582 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/32e3d1a4cdc9f2ec88fc303b5185b27a-kubeconfig\") pod \"kube-scheduler-ci-4547-0-0-n-a43761813d\" (UID: \"32e3d1a4cdc9f2ec88fc303b5185b27a\") " pod="kube-system/kube-scheduler-ci-4547-0-0-n-a43761813d" Jan 14 00:30:30.163058 kubelet[2886]: I0114 00:30:30.162603 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f6aaadd855ee6529b0c772ffb0c49199-ca-certs\") pod \"kube-apiserver-ci-4547-0-0-n-a43761813d\" (UID: \"f6aaadd855ee6529b0c772ffb0c49199\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-a43761813d" Jan 14 00:30:30.163058 kubelet[2886]: I0114 00:30:30.162618 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f6aaadd855ee6529b0c772ffb0c49199-k8s-certs\") pod \"kube-apiserver-ci-4547-0-0-n-a43761813d\" (UID: \"f6aaadd855ee6529b0c772ffb0c49199\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-a43761813d" Jan 14 00:30:30.163058 kubelet[2886]: I0114 00:30:30.162638 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f6aaadd855ee6529b0c772ffb0c49199-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547-0-0-n-a43761813d\" (UID: \"f6aaadd855ee6529b0c772ffb0c49199\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-a43761813d" Jan 14 00:30:30.163314 kubelet[2886]: I0114 00:30:30.162655 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f8060305b76c87a1ad5ff6687a656f9-ca-certs\") pod 
\"kube-controller-manager-ci-4547-0-0-n-a43761813d\" (UID: \"7f8060305b76c87a1ad5ff6687a656f9\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a43761813d" Jan 14 00:30:30.163314 kubelet[2886]: I0114 00:30:30.162697 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7f8060305b76c87a1ad5ff6687a656f9-flexvolume-dir\") pod \"kube-controller-manager-ci-4547-0-0-n-a43761813d\" (UID: \"7f8060305b76c87a1ad5ff6687a656f9\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a43761813d" Jan 14 00:30:30.163314 kubelet[2886]: I0114 00:30:30.162715 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f8060305b76c87a1ad5ff6687a656f9-kubeconfig\") pod \"kube-controller-manager-ci-4547-0-0-n-a43761813d\" (UID: \"7f8060305b76c87a1ad5ff6687a656f9\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a43761813d" Jan 14 00:30:30.163314 kubelet[2886]: I0114 00:30:30.162731 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f8060305b76c87a1ad5ff6687a656f9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547-0-0-n-a43761813d\" (UID: \"7f8060305b76c87a1ad5ff6687a656f9\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a43761813d" Jan 14 00:30:30.165941 kubelet[2886]: E0114 00:30:30.165806 2886 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547-0-0-n-a43761813d\" already exists" pod="kube-system/kube-scheduler-ci-4547-0-0-n-a43761813d" Jan 14 00:30:30.735001 kubelet[2886]: I0114 00:30:30.734901 2886 apiserver.go:52] "Watching apiserver" Jan 14 00:30:30.760397 kubelet[2886]: I0114 00:30:30.760302 2886 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 00:30:30.768959 sshd[2788]: Connection closed by authenticating user root 5.187.35.21 port 42998 [preauth] Jan 14 00:30:30.769000 audit[2788]: USER_ERR pid=2788 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:30.775690 systemd[1]: sshd@17-91.99.0.249:22-5.187.35.21:42998.service: Deactivated successfully. Jan 14 00:30:30.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-91.99.0.249:22-5.187.35.21:42998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:30.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-91.99.0.249:22-5.187.35.21:43014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:30.796244 systemd[1]: Started sshd@18-91.99.0.249:22-5.187.35.21:43014.service - OpenSSH per-connection server daemon (5.187.35.21:43014). 
Jan 14 00:30:30.904269 kubelet[2886]: I0114 00:30:30.904099 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547-0-0-n-a43761813d" podStartSLOduration=0.903973608 podStartE2EDuration="903.973608ms" podCreationTimestamp="2026-01-14 00:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:30:30.882329464 +0000 UTC m=+1.278425299" watchObservedRunningTime="2026-01-14 00:30:30.903973608 +0000 UTC m=+1.300069443" Jan 14 00:30:30.905360 kubelet[2886]: I0114 00:30:30.904630 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a43761813d" podStartSLOduration=0.904485128 podStartE2EDuration="904.485128ms" podCreationTimestamp="2026-01-14 00:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:30:30.9036347 +0000 UTC m=+1.299730535" watchObservedRunningTime="2026-01-14 00:30:30.904485128 +0000 UTC m=+1.300580963" Jan 14 00:30:30.950866 kubelet[2886]: I0114 00:30:30.950426 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547-0-0-n-a43761813d" podStartSLOduration=3.950392431 podStartE2EDuration="3.950392431s" podCreationTimestamp="2026-01-14 00:30:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:30:30.927511971 +0000 UTC m=+1.323607806" watchObservedRunningTime="2026-01-14 00:30:30.950392431 +0000 UTC m=+1.346488506" Jan 14 00:30:34.306350 kubelet[2886]: I0114 00:30:34.304006 2886 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 00:30:34.307413 containerd[1612]: time="2026-01-14T00:30:34.307343696Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 00:30:34.308212 kubelet[2886]: I0114 00:30:34.308098 2886 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 00:30:34.325005 sshd[2930]: Connection closed by authenticating user root 5.187.35.21 port 43014 [preauth] Jan 14 00:30:34.329876 kernel: kauditd_printk_skb: 38 callbacks suppressed Jan 14 00:30:34.330083 kernel: audit: type=1109 audit(1768350634.325:462): pid=2930 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:34.325000 audit[2930]: USER_ERR pid=2930 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:34.331310 systemd[1]: sshd@18-91.99.0.249:22-5.187.35.21:43014.service: Deactivated successfully. Jan 14 00:30:34.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-91.99.0.249:22-5.187.35.21:43014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:30:34.337890 kernel: audit: type=1131 audit(1768350634.333:463): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-91.99.0.249:22-5.187.35.21:43014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:34.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-91.99.0.249:22-5.187.35.21:58056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:34.364774 systemd[1]: Started sshd@19-91.99.0.249:22-5.187.35.21:58056.service - OpenSSH per-connection server daemon (5.187.35.21:58056). Jan 14 00:30:34.370864 kernel: audit: type=1130 audit(1768350634.364:464): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-91.99.0.249:22-5.187.35.21:58056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:34.825659 systemd[1]: Created slice kubepods-besteffort-pod77f9f82b_13a8_42a5_907c_2fe4ddbad8a8.slice - libcontainer container kubepods-besteffort-pod77f9f82b_13a8_42a5_907c_2fe4ddbad8a8.slice. Jan 14 00:30:34.900853 kubelet[2886]: I0114 00:30:34.900726 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/77f9f82b-13a8-42a5-907c-2fe4ddbad8a8-kube-proxy\") pod \"kube-proxy-vg96w\" (UID: \"77f9f82b-13a8-42a5-907c-2fe4ddbad8a8\") " pod="kube-system/kube-proxy-vg96w" Jan 14 00:30:34.900853 kubelet[2886]: I0114 00:30:34.900848 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77f9f82b-13a8-42a5-907c-2fe4ddbad8a8-xtables-lock\") pod \"kube-proxy-vg96w\" (UID: \"77f9f82b-13a8-42a5-907c-2fe4ddbad8a8\") " pod="kube-system/kube-proxy-vg96w" Jan 14 00:30:34.901188 kubelet[2886]: I0114 00:30:34.900886 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77f9f82b-13a8-42a5-907c-2fe4ddbad8a8-lib-modules\") pod \"kube-proxy-vg96w\" (UID: \"77f9f82b-13a8-42a5-907c-2fe4ddbad8a8\") " pod="kube-system/kube-proxy-vg96w" Jan 14 00:30:34.901188 kubelet[2886]: I0114 00:30:34.900938 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxz9b\" (UniqueName: \"kubernetes.io/projected/77f9f82b-13a8-42a5-907c-2fe4ddbad8a8-kube-api-access-mxz9b\") pod \"kube-proxy-vg96w\" (UID: \"77f9f82b-13a8-42a5-907c-2fe4ddbad8a8\") " pod="kube-system/kube-proxy-vg96w" Jan 14 00:30:35.015849 kubelet[2886]: E0114 00:30:35.014967 2886 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 14 00:30:35.015849 kubelet[2886]: E0114 00:30:35.015005 2886 projected.go:194] Error preparing data for projected volume kube-api-access-mxz9b for pod kube-system/kube-proxy-vg96w: configmap "kube-root-ca.crt" not found Jan 14 00:30:35.016092 kubelet[2886]: E0114 00:30:35.016065 2886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/77f9f82b-13a8-42a5-907c-2fe4ddbad8a8-kube-api-access-mxz9b podName:77f9f82b-13a8-42a5-907c-2fe4ddbad8a8 nodeName:}" failed. No retries permitted until 2026-01-14 00:30:35.515072594 +0000 UTC m=+5.911168429 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mxz9b" (UniqueName: "kubernetes.io/projected/77f9f82b-13a8-42a5-907c-2fe4ddbad8a8-kube-api-access-mxz9b") pod "kube-proxy-vg96w" (UID: "77f9f82b-13a8-42a5-907c-2fe4ddbad8a8") : configmap "kube-root-ca.crt" not found Jan 14 00:30:35.450325 systemd[1]: Created slice kubepods-besteffort-pod1491ac50_45d9_4013_bdc2_d5d3d5650076.slice - libcontainer container kubepods-besteffort-pod1491ac50_45d9_4013_bdc2_d5d3d5650076.slice. Jan 14 00:30:35.506659 kubelet[2886]: I0114 00:30:35.506406 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x742\" (UniqueName: \"kubernetes.io/projected/1491ac50-45d9-4013-bdc2-d5d3d5650076-kube-api-access-6x742\") pod \"tigera-operator-7dcd859c48-8zlr6\" (UID: \"1491ac50-45d9-4013-bdc2-d5d3d5650076\") " pod="tigera-operator/tigera-operator-7dcd859c48-8zlr6" Jan 14 00:30:35.506659 kubelet[2886]: I0114 00:30:35.506514 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1491ac50-45d9-4013-bdc2-d5d3d5650076-var-lib-calico\") pod \"tigera-operator-7dcd859c48-8zlr6\" (UID: \"1491ac50-45d9-4013-bdc2-d5d3d5650076\") " pod="tigera-operator/tigera-operator-7dcd859c48-8zlr6" Jan 14 00:30:35.746217 containerd[1612]: time="2026-01-14T00:30:35.746054545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vg96w,Uid:77f9f82b-13a8-42a5-907c-2fe4ddbad8a8,Namespace:kube-system,Attempt:0,}" Jan 14 00:30:35.756646 containerd[1612]: time="2026-01-14T00:30:35.756543777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-8zlr6,Uid:1491ac50-45d9-4013-bdc2-d5d3d5650076,Namespace:tigera-operator,Attempt:0,}" Jan 14 00:30:35.788749 containerd[1612]: time="2026-01-14T00:30:35.788632338Z" level=info msg="connecting to shim 128db7185de0540b5d9449149ea8310ffdf33f881e9b2b5530910bf636589af7" address="unix:///run/containerd/s/af359752a1969f6a0a1e89f0a9c1f0c3a88a88f76d0ca1f4a80c9e4ed6297c58" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:30:35.812363 containerd[1612]: time="2026-01-14T00:30:35.811409574Z" level=info msg="connecting to shim 9baa613353c1749552e7472fef947d75a1fceb08132c0dfb09eb57748bd85cbd" address="unix:///run/containerd/s/c1de1d4aec5203fb8e2adbff6cf5499c9e2aec30324f0a5d42e8ca52ad8f4033" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:30:35.839428 systemd[1]: Started cri-containerd-128db7185de0540b5d9449149ea8310ffdf33f881e9b2b5530910bf636589af7.scope - libcontainer container 128db7185de0540b5d9449149ea8310ffdf33f881e9b2b5530910bf636589af7. Jan 14 00:30:35.861163 systemd[1]: Started cri-containerd-9baa613353c1749552e7472fef947d75a1fceb08132c0dfb09eb57748bd85cbd.scope - libcontainer container 9baa613353c1749552e7472fef947d75a1fceb08132c0dfb09eb57748bd85cbd. 
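
Note: the MountVolume.SetUp failure above is the kubelet waiting for the kube-root-ca.crt ConfigMap, which kube-controller-manager publishes into each namespace once the control plane is reachable; the durationBeforeRetry 500ms is the first step of the kubelet's per-operation exponential backoff, so the mount resolves on a later retry. A quick way to confirm the ConfigMap has appeared, assuming the kubernetes Python client and a working kubeconfig on the analysis host:

    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()  # or config.load_incluster_config() from inside the cluster
    v1 = client.CoreV1Api()
    try:
        cm = v1.read_namespaced_config_map("kube-root-ca.crt", "kube-system")
        print("kube-root-ca.crt present,", len((cm.data or {}).get("ca.crt", "")), "bytes of CA bundle")
    except ApiException as e:
        if e.status == 404:
            print("kube-root-ca.crt not published yet; volume mounts will keep backing off")
        else:
            raise
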
Jan 14 00:30:35.862000 audit: BPF prog-id=131 op=LOAD Jan 14 00:30:35.864000 audit: BPF prog-id=132 op=LOAD Jan 14 00:30:35.867280 kernel: audit: type=1334 audit(1768350635.862:465): prog-id=131 op=LOAD Jan 14 00:30:35.867396 kernel: audit: type=1334 audit(1768350635.864:466): prog-id=132 op=LOAD Jan 14 00:30:35.864000 audit[2979]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=2957 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:35.871550 kernel: audit: type=1300 audit(1768350635.864:466): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=2957 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:35.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132386462373138356465303534306235643934343931343965613833 Jan 14 00:30:35.874823 kernel: audit: type=1327 audit(1768350635.864:466): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132386462373138356465303534306235643934343931343965613833 Jan 14 00:30:35.865000 audit: BPF prog-id=132 op=UNLOAD Jan 14 00:30:35.876324 kernel: audit: type=1334 audit(1768350635.865:467): prog-id=132 op=UNLOAD Jan 14 00:30:35.865000 audit[2979]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2957 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:35.879191 kernel: audit: type=1300 audit(1768350635.865:467): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2957 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:35.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132386462373138356465303534306235643934343931343965613833 Jan 14 00:30:35.884468 kernel: audit: type=1327 audit(1768350635.865:467): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132386462373138356465303534306235643934343931343965613833 Jan 14 00:30:35.865000 audit: BPF prog-id=133 op=LOAD Jan 14 00:30:35.865000 audit[2979]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=2957 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:35.865000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132386462373138356465303534306235643934343931343965613833 Jan 14 00:30:35.865000 audit: BPF prog-id=134 op=LOAD Jan 14 00:30:35.865000 audit[2979]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=2957 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:35.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132386462373138356465303534306235643934343931343965613833 Jan 14 00:30:35.866000 audit: BPF prog-id=134 op=UNLOAD Jan 14 00:30:35.866000 audit[2979]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2957 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:35.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132386462373138356465303534306235643934343931343965613833 Jan 14 00:30:35.866000 audit: BPF prog-id=133 op=UNLOAD Jan 14 00:30:35.866000 audit[2979]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2957 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:35.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132386462373138356465303534306235643934343931343965613833 Jan 14 00:30:35.869000 audit: BPF prog-id=135 op=LOAD Jan 14 00:30:35.869000 audit[2979]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=2957 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:35.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132386462373138356465303534306235643934343931343965613833 Jan 14 00:30:35.908000 audit: BPF prog-id=136 op=LOAD Jan 14 00:30:35.909000 audit: BPF prog-id=137 op=LOAD Jan 14 00:30:35.909000 audit[3000]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2974 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:35.909000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962616136313333353363313734393535326537343732666566393437 Jan 14 00:30:35.911000 audit: BPF prog-id=137 op=UNLOAD Jan 14 00:30:35.911000 audit[3000]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2974 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:35.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962616136313333353363313734393535326537343732666566393437 Jan 14 00:30:35.911000 audit: BPF prog-id=138 op=LOAD Jan 14 00:30:35.911000 audit[3000]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2974 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:35.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962616136313333353363313734393535326537343732666566393437 Jan 14 00:30:35.911000 audit: BPF prog-id=139 op=LOAD Jan 14 00:30:35.911000 audit[3000]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2974 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:35.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962616136313333353363313734393535326537343732666566393437 Jan 14 00:30:35.911000 audit: BPF prog-id=139 op=UNLOAD Jan 14 00:30:35.911000 audit[3000]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2974 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:35.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962616136313333353363313734393535326537343732666566393437 Jan 14 00:30:35.911000 audit: BPF prog-id=138 op=UNLOAD Jan 14 00:30:35.911000 audit[3000]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2974 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:35.911000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962616136313333353363313734393535326537343732666566393437 Jan 14 00:30:35.912000 audit: BPF prog-id=140 op=LOAD Jan 14 00:30:35.912000 audit[3000]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2974 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:35.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962616136313333353363313734393535326537343732666566393437 Jan 14 00:30:35.924238 containerd[1612]: time="2026-01-14T00:30:35.923998081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vg96w,Uid:77f9f82b-13a8-42a5-907c-2fe4ddbad8a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"128db7185de0540b5d9449149ea8310ffdf33f881e9b2b5530910bf636589af7\"" Jan 14 00:30:35.934157 containerd[1612]: time="2026-01-14T00:30:35.934114419Z" level=info msg="CreateContainer within sandbox \"128db7185de0540b5d9449149ea8310ffdf33f881e9b2b5530910bf636589af7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 00:30:35.952001 containerd[1612]: time="2026-01-14T00:30:35.951952732Z" level=info msg="Container bf8b36391ee209688cbd6eb69c36a3570883654604b8b8abed6065c757127aae: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:30:35.963691 containerd[1612]: time="2026-01-14T00:30:35.963627029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-8zlr6,Uid:1491ac50-45d9-4013-bdc2-d5d3d5650076,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9baa613353c1749552e7472fef947d75a1fceb08132c0dfb09eb57748bd85cbd\"" Jan 14 00:30:35.969662 containerd[1612]: time="2026-01-14T00:30:35.969588262Z" level=info msg="CreateContainer within sandbox \"128db7185de0540b5d9449149ea8310ffdf33f881e9b2b5530910bf636589af7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bf8b36391ee209688cbd6eb69c36a3570883654604b8b8abed6065c757127aae\"" Jan 14 00:30:35.970479 containerd[1612]: time="2026-01-14T00:30:35.969912411Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 00:30:35.970479 containerd[1612]: time="2026-01-14T00:30:35.970428785Z" level=info msg="StartContainer for \"bf8b36391ee209688cbd6eb69c36a3570883654604b8b8abed6065c757127aae\"" Jan 14 00:30:35.982587 containerd[1612]: time="2026-01-14T00:30:35.982537799Z" level=info msg="connecting to shim bf8b36391ee209688cbd6eb69c36a3570883654604b8b8abed6065c757127aae" address="unix:///run/containerd/s/af359752a1969f6a0a1e89f0a9c1f0c3a88a88f76d0ca1f4a80c9e4ed6297c58" protocol=ttrpc version=3 Jan 14 00:30:36.007183 systemd[1]: Started cri-containerd-bf8b36391ee209688cbd6eb69c36a3570883654604b8b8abed6065c757127aae.scope - libcontainer container bf8b36391ee209688cbd6eb69c36a3570883654604b8b8abed6065c757127aae. 
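
Note: each "connecting to shim <id>" line above ties a CRI sandbox or container ID to a per-shim ttrpc socket under /run/containerd/s/, and the matching systemd scope is named cri-containerd-<id>.scope. A small sketch (journal text piped in on stdin) that pulls that mapping out, which helps match the later StartContainer and audit records back to a shim:

    import re
    import sys

    # Matches containerd lines like:
    #   msg="connecting to shim 128db718..." address="unix:///run/containerd/s/af3597..."
    SHIM_RE = re.compile(
        r'connecting to shim (?P<cid>[0-9a-f]{16,})"\s+address="(?P<addr>unix://[^"]+)"'
    )

    for line in sys.stdin:
        m = SHIM_RE.search(line)
        if m:
            print(m.group("cid")[:12], m.group("addr"))
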
Jan 14 00:30:36.069000 audit: BPF prog-id=141 op=LOAD Jan 14 00:30:36.069000 audit[3040]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=2957 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266386233363339316565323039363838636264366562363963333661 Jan 14 00:30:36.070000 audit: BPF prog-id=142 op=LOAD Jan 14 00:30:36.070000 audit[3040]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=2957 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266386233363339316565323039363838636264366562363963333661 Jan 14 00:30:36.070000 audit: BPF prog-id=142 op=UNLOAD Jan 14 00:30:36.070000 audit[3040]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2957 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266386233363339316565323039363838636264366562363963333661 Jan 14 00:30:36.070000 audit: BPF prog-id=141 op=UNLOAD Jan 14 00:30:36.070000 audit[3040]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2957 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266386233363339316565323039363838636264366562363963333661 Jan 14 00:30:36.070000 audit: BPF prog-id=143 op=LOAD Jan 14 00:30:36.070000 audit[3040]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=2957 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266386233363339316565323039363838636264366562363963333661 Jan 14 00:30:36.106715 containerd[1612]: time="2026-01-14T00:30:36.106476665Z" level=info msg="StartContainer for 
\"bf8b36391ee209688cbd6eb69c36a3570883654604b8b8abed6065c757127aae\" returns successfully" Jan 14 00:30:36.308000 audit[3104]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:36.308000 audit[3104]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffce87d4e0 a2=0 a3=1 items=0 ppid=3053 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.308000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 00:30:36.311000 audit[3106]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:36.311000 audit[3106]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffce7d4240 a2=0 a3=1 items=0 ppid=3053 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.311000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 00:30:36.314000 audit[3109]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:36.314000 audit[3109]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd8145700 a2=0 a3=1 items=0 ppid=3053 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.314000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 00:30:36.324000 audit[3110]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=3110 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:36.324000 audit[3110]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcada8960 a2=0 a3=1 items=0 ppid=3053 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.324000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 00:30:36.327000 audit[3112]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:36.327000 audit[3112]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcf9fca50 a2=0 a3=1 items=0 ppid=3053 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.327000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 00:30:36.329000 audit[3113]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3113 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:36.329000 audit[3113]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffecc738c0 a2=0 a3=1 items=0 ppid=3053 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.329000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 00:30:36.418000 audit[3114]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3114 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:36.418000 audit[3114]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff4568920 a2=0 a3=1 items=0 ppid=3053 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.418000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 00:30:36.435000 audit[3116]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:36.435000 audit[3116]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffcdc49fb0 a2=0 a3=1 items=0 ppid=3053 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.435000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 14 00:30:36.443000 audit[3119]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:36.443000 audit[3119]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffcd9266d0 a2=0 a3=1 items=0 ppid=3053 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.443000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 14 00:30:36.446000 audit[3120]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3120 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:36.446000 audit[3120]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffed7fa250 a2=0 a3=1 items=0 ppid=3053 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.446000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 00:30:36.451000 
audit[3122]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3122 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:36.451000 audit[3122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd17b2640 a2=0 a3=1 items=0 ppid=3053 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.451000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 00:30:36.454000 audit[3123]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3123 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:36.454000 audit[3123]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe21158b0 a2=0 a3=1 items=0 ppid=3053 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.454000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 00:30:36.459000 audit[3125]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:36.459000 audit[3125]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffde233ab0 a2=0 a3=1 items=0 ppid=3053 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.459000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 00:30:36.470000 audit[3128]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3128 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:36.470000 audit[3128]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd9d7a7b0 a2=0 a3=1 items=0 ppid=3053 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.470000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 14 00:30:36.472000 audit[3129]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:36.472000 audit[3129]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffea9eb080 a2=0 a3=1 items=0 ppid=3053 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.472000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 00:30:36.478000 audit[3131]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3131 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:36.478000 audit[3131]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd0a4c790 a2=0 a3=1 items=0 ppid=3053 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.478000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 00:30:36.483000 audit[3132]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3132 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:36.483000 audit[3132]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd12bf780 a2=0 a3=1 items=0 ppid=3053 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.483000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 00:30:36.489000 audit[3134]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3134 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:36.489000 audit[3134]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe48c4050 a2=0 a3=1 items=0 ppid=3053 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.489000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 00:30:36.496000 audit[3137]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:36.496000 audit[3137]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffda9b13f0 a2=0 a3=1 items=0 ppid=3053 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.496000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 00:30:36.507000 audit[3140]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3140 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:36.507000 audit[3140]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff041f510 
a2=0 a3=1 items=0 ppid=3053 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.507000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 00:30:36.512000 audit[3141]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3141 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:36.512000 audit[3141]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdbd797f0 a2=0 a3=1 items=0 ppid=3053 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.512000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 00:30:36.518000 audit[3143]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:36.518000 audit[3143]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffd5a95820 a2=0 a3=1 items=0 ppid=3053 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.518000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 00:30:36.524000 audit[3146]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:36.524000 audit[3146]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffeba92430 a2=0 a3=1 items=0 ppid=3053 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.524000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 00:30:36.527000 audit[3147]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3147 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:30:36.527000 audit[3147]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffec7c16f0 a2=0 a3=1 items=0 ppid=3053 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.527000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 00:30:36.532000 audit[3149]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3149 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
Jan 14 00:30:36.532000 audit[3149]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffed947df0 a2=0 a3=1 items=0 ppid=3053 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.532000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 00:30:36.573000 audit[3155]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3155 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:30:36.573000 audit[3155]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd7edf2d0 a2=0 a3=1 items=0 ppid=3053 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.573000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:30:36.590000 audit[3155]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3155 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:30:36.590000 audit[3155]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffd7edf2d0 a2=0 a3=1 items=0 ppid=3053 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.590000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:30:36.595000 audit[3160]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3160 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:36.595000 audit[3160]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc1952cb0 a2=0 a3=1 items=0 ppid=3053 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.595000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 00:30:36.601000 audit[3162]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3162 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:36.601000 audit[3162]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff77c7460 a2=0 a3=1 items=0 ppid=3053 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.601000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 14 00:30:36.609000 audit[3165]: NETFILTER_CFG table=filter:83 
family=10 entries=1 op=nft_register_rule pid=3165 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:36.609000 audit[3165]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff8144300 a2=0 a3=1 items=0 ppid=3053 pid=3165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.609000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 14 00:30:36.612000 audit[3166]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3166 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:36.612000 audit[3166]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff8fedcd0 a2=0 a3=1 items=0 ppid=3053 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.612000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 00:30:36.632000 audit[3168]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3168 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:36.632000 audit[3168]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe85cd140 a2=0 a3=1 items=0 ppid=3053 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.632000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 00:30:36.637000 audit[3169]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3169 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:36.637000 audit[3169]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff9bf6610 a2=0 a3=1 items=0 ppid=3053 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.637000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 00:30:36.653000 audit[3171]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3171 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:36.653000 audit[3171]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffec9dc8e0 a2=0 a3=1 items=0 ppid=3053 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.653000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 14 00:30:36.667000 audit[3174]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3174 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:36.667000 audit[3174]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffed6f9dc0 a2=0 a3=1 items=0 ppid=3053 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.667000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 00:30:36.669000 audit[3175]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3175 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:36.669000 audit[3175]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc16244a0 a2=0 a3=1 items=0 ppid=3053 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.669000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 00:30:36.678000 audit[3177]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3177 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:36.678000 audit[3177]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe39bd390 a2=0 a3=1 items=0 ppid=3053 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.678000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 00:30:36.682000 audit[3178]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3178 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:36.682000 audit[3178]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd35a79f0 a2=0 a3=1 items=0 ppid=3053 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.682000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 00:30:36.689000 audit[3180]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:36.689000 audit[3180]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff80032a0 a2=0 a3=1 items=0 ppid=3053 pid=3180 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.689000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 00:30:36.699000 audit[3183]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:36.699000 audit[3183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff5de2810 a2=0 a3=1 items=0 ppid=3053 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.699000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 00:30:36.707000 audit[3186]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:36.707000 audit[3186]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff8d09c70 a2=0 a3=1 items=0 ppid=3053 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.707000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 14 00:30:36.711000 audit[3187]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3187 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:36.711000 audit[3187]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe919c5d0 a2=0 a3=1 items=0 ppid=3053 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.711000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 00:30:36.719000 audit[3189]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3189 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:36.719000 audit[3189]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffdfa18610 a2=0 a3=1 items=0 ppid=3053 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.719000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 
00:30:36.726000 audit[3192]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3192 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:36.726000 audit[3192]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcf970040 a2=0 a3=1 items=0 ppid=3053 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.726000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 00:30:36.728000 audit[3193]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3193 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:36.728000 audit[3193]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcc00a640 a2=0 a3=1 items=0 ppid=3053 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.728000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 00:30:36.733000 audit[3195]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3195 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:36.733000 audit[3195]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe8bc79f0 a2=0 a3=1 items=0 ppid=3053 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.733000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 00:30:36.736000 audit[3196]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3196 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:36.736000 audit[3196]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffd4fc3b0 a2=0 a3=1 items=0 ppid=3053 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.736000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 00:30:36.740000 audit[3198]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3198 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:36.740000 audit[3198]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff9384910 a2=0 a3=1 items=0 ppid=3053 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.740000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 00:30:36.747000 audit[3201]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3201 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:30:36.747000 audit[3201]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe519d430 a2=0 a3=1 items=0 ppid=3053 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.747000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 00:30:36.757000 audit[3203]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3203 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 00:30:36.757000 audit[3203]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffd43373b0 a2=0 a3=1 items=0 ppid=3053 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.757000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:30:36.758000 audit[3203]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3203 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 00:30:36.758000 audit[3203]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffd43373b0 a2=0 a3=1 items=0 ppid=3053 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:36.758000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:30:36.891312 kubelet[2886]: I0114 00:30:36.890670 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vg96w" podStartSLOduration=2.890548623 podStartE2EDuration="2.890548623s" podCreationTimestamp="2026-01-14 00:30:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:30:36.88999529 +0000 UTC m=+7.286091205" watchObservedRunningTime="2026-01-14 00:30:36.890548623 +0000 UTC m=+7.286644498" Jan 14 00:30:37.940697 sshd[2942]: Connection closed by authenticating user root 5.187.35.21 port 58056 [preauth] Jan 14 00:30:37.940000 audit[2942]: USER_ERR pid=2942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:37.941590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1984854993.mount: Deactivated successfully. Jan 14 00:30:37.952971 systemd[1]: sshd@19-91.99.0.249:22-5.187.35.21:58056.service: Deactivated successfully. 
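The burst of NETFILTER_CFG records above is kube-proxy registering its base KUBE-* chains and jump rules for both IPv4 (family=2) and IPv6 (family=10); decoded the same way as above, the PROCTITLE values are commands such as iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle. A rough sketch, assuming the journal text is fed on stdin, that tallies these records by family, table and operation:

    # Tally NETFILTER_CFG audit records by address family, table and operation.
    # family=2 is IPv4 (iptables), family=10 is IPv6 (ip6tables).
    import collections
    import re
    import sys

    pattern = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=(\d+) entries=(\d+) op=(\w+)")
    counts = collections.Counter()
    for line in sys.stdin:
        for table, family, entries, op in pattern.findall(line):
            counts[(family, table, op)] += int(entries)
    for (family, table, op), total in sorted(counts.items()):
        print(f"family={family} table={table} {op}: {total} entries")

Run over this span, it shows the same chain set being created twice, once per protocol family.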
Jan 14 00:30:37.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-91.99.0.249:22-5.187.35.21:58056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:37.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-91.99.0.249:22-5.187.35.21:58086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:37.986186 systemd[1]: Started sshd@20-91.99.0.249:22-5.187.35.21:58086.service - OpenSSH per-connection server daemon (5.187.35.21:58086). Jan 14 00:30:38.490132 containerd[1612]: time="2026-01-14T00:30:38.490064798Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:30:38.492015 containerd[1612]: time="2026-01-14T00:30:38.491944520Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=0" Jan 14 00:30:38.495106 containerd[1612]: time="2026-01-14T00:30:38.495025361Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:30:38.500934 containerd[1612]: time="2026-01-14T00:30:38.500055469Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:30:38.502095 containerd[1612]: time="2026-01-14T00:30:38.502042846Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.532094445s" Jan 14 00:30:38.502851 containerd[1612]: time="2026-01-14T00:30:38.502697333Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 14 00:30:38.511228 containerd[1612]: time="2026-01-14T00:30:38.511181795Z" level=info msg="CreateContainer within sandbox \"9baa613353c1749552e7472fef947d75a1fceb08132c0dfb09eb57748bd85cbd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 00:30:38.527439 containerd[1612]: time="2026-01-14T00:30:38.526408646Z" level=info msg="Container 9845a028f70fe7bfbbddbc2cd1d62da79bf0df3f6be91847ea2701d0ad58d687: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:30:38.535465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount417405107.mount: Deactivated successfully. 
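The operator pull above reports 22147999 bytes fetched in 2.532094445 s; as a quick sanity check, the effective transfer rate from those two logged values:

    # Effective rate of the quay.io/tigera/operator:v1.38.7 pull logged above.
    size_bytes = 22_147_999      # "size" reported by containerd
    duration_s = 2.532094445     # pull duration reported by containerd
    print(f"{size_bytes / duration_s / 1e6:.1f} MB/s")  # ~8.7 MB/s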
Jan 14 00:30:38.539980 containerd[1612]: time="2026-01-14T00:30:38.539902620Z" level=info msg="CreateContainer within sandbox \"9baa613353c1749552e7472fef947d75a1fceb08132c0dfb09eb57748bd85cbd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9845a028f70fe7bfbbddbc2cd1d62da79bf0df3f6be91847ea2701d0ad58d687\"" Jan 14 00:30:38.542345 containerd[1612]: time="2026-01-14T00:30:38.542264790Z" level=info msg="StartContainer for \"9845a028f70fe7bfbbddbc2cd1d62da79bf0df3f6be91847ea2701d0ad58d687\"" Jan 14 00:30:38.543836 containerd[1612]: time="2026-01-14T00:30:38.543766559Z" level=info msg="connecting to shim 9845a028f70fe7bfbbddbc2cd1d62da79bf0df3f6be91847ea2701d0ad58d687" address="unix:///run/containerd/s/c1de1d4aec5203fb8e2adbff6cf5499c9e2aec30324f0a5d42e8ca52ad8f4033" protocol=ttrpc version=3 Jan 14 00:30:38.580270 systemd[1]: Started cri-containerd-9845a028f70fe7bfbbddbc2cd1d62da79bf0df3f6be91847ea2701d0ad58d687.scope - libcontainer container 9845a028f70fe7bfbbddbc2cd1d62da79bf0df3f6be91847ea2701d0ad58d687. Jan 14 00:30:38.595000 audit: BPF prog-id=144 op=LOAD Jan 14 00:30:38.596000 audit: BPF prog-id=145 op=LOAD Jan 14 00:30:38.596000 audit[3218]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2974 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:38.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938343561303238663730666537626662626464626332636431643632 Jan 14 00:30:38.597000 audit: BPF prog-id=145 op=UNLOAD Jan 14 00:30:38.597000 audit[3218]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2974 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:38.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938343561303238663730666537626662626464626332636431643632 Jan 14 00:30:38.597000 audit: BPF prog-id=146 op=LOAD Jan 14 00:30:38.597000 audit[3218]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2974 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:38.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938343561303238663730666537626662626464626332636431643632 Jan 14 00:30:38.597000 audit: BPF prog-id=147 op=LOAD Jan 14 00:30:38.597000 audit[3218]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2974 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:38.597000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938343561303238663730666537626662626464626332636431643632 Jan 14 00:30:38.597000 audit: BPF prog-id=147 op=UNLOAD Jan 14 00:30:38.597000 audit[3218]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2974 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:38.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938343561303238663730666537626662626464626332636431643632 Jan 14 00:30:38.597000 audit: BPF prog-id=146 op=UNLOAD Jan 14 00:30:38.597000 audit[3218]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2974 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:38.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938343561303238663730666537626662626464626332636431643632 Jan 14 00:30:38.597000 audit: BPF prog-id=148 op=LOAD Jan 14 00:30:38.597000 audit[3218]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2974 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:38.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938343561303238663730666537626662626464626332636431643632 Jan 14 00:30:38.626457 containerd[1612]: time="2026-01-14T00:30:38.626390819Z" level=info msg="StartContainer for \"9845a028f70fe7bfbbddbc2cd1d62da79bf0df3f6be91847ea2701d0ad58d687\" returns successfully" Jan 14 00:30:39.800310 kubelet[2886]: I0114 00:30:39.800081 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-8zlr6" podStartSLOduration=2.262729683 podStartE2EDuration="4.799965743s" podCreationTimestamp="2026-01-14 00:30:35 +0000 UTC" firstStartedPulling="2026-01-14 00:30:35.967002754 +0000 UTC m=+6.363098589" lastFinishedPulling="2026-01-14 00:30:38.504238774 +0000 UTC m=+8.900334649" observedRunningTime="2026-01-14 00:30:38.900690558 +0000 UTC m=+9.296786393" watchObservedRunningTime="2026-01-14 00:30:39.799965743 +0000 UTC m=+10.196061578" Jan 14 00:30:41.347145 sshd[3215]: Connection closed by authenticating user root 5.187.35.21 port 58086 [preauth] Jan 14 00:30:41.352483 kernel: kauditd_printk_skb: 230 callbacks suppressed Jan 14 00:30:41.352541 kernel: audit: type=1109 audit(1768350641.348:548): pid=3215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" 
exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:41.348000 audit[3215]: USER_ERR pid=3215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:41.354439 systemd[1]: sshd@20-91.99.0.249:22-5.187.35.21:58086.service: Deactivated successfully. Jan 14 00:30:41.360057 kernel: audit: type=1131 audit(1768350641.355:549): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-91.99.0.249:22-5.187.35.21:58086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:41.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-91.99.0.249:22-5.187.35.21:58086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:41.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-91.99.0.249:22-5.187.35.21:58144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:41.385728 systemd[1]: Started sshd@21-91.99.0.249:22-5.187.35.21:58144.service - OpenSSH per-connection server daemon (5.187.35.21:58144). Jan 14 00:30:41.392960 kernel: audit: type=1130 audit(1768350641.385:550): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-91.99.0.249:22-5.187.35.21:58144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:45.069717 sshd[3258]: Connection closed by authenticating user root 5.187.35.21 port 58144 [preauth] Jan 14 00:30:45.068000 audit[3258]: USER_ERR pid=3258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:45.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-91.99.0.249:22-5.187.35.21:58144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:45.075602 systemd[1]: sshd@21-91.99.0.249:22-5.187.35.21:58144.service: Deactivated successfully. Jan 14 00:30:45.082236 kernel: audit: type=1109 audit(1768350645.068:551): pid=3258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:45.082345 kernel: audit: type=1131 audit(1768350645.074:552): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-91.99.0.249:22-5.187.35.21:58144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:45.102051 systemd[1]: Started sshd@22-91.99.0.249:22-5.187.35.21:35938.service - OpenSSH per-connection server daemon (5.187.35.21:35938). Jan 14 00:30:45.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-91.99.0.249:22-5.187.35.21:35938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:30:45.107864 kernel: audit: type=1130 audit(1768350645.100:553): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-91.99.0.249:22-5.187.35.21:35938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:45.590752 sudo[1868]: pam_unix(sudo:session): session closed for user root Jan 14 00:30:45.589000 audit[1868]: USER_END pid=1868 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:30:45.589000 audit[1868]: CRED_DISP pid=1868 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:30:45.597213 kernel: audit: type=1106 audit(1768350645.589:554): pid=1868 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:30:45.597367 kernel: audit: type=1104 audit(1768350645.589:555): pid=1868 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:30:45.690853 sshd[1867]: Connection closed by 4.153.228.146 port 50372 Jan 14 00:30:45.693032 sshd-session[1863]: pam_unix(sshd:session): session closed for user core Jan 14 00:30:45.692000 audit[1863]: USER_END pid=1863 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:30:45.693000 audit[1863]: CRED_DISP pid=1863 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:30:45.708459 kernel: audit: type=1106 audit(1768350645.692:556): pid=1863 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:30:45.708570 kernel: audit: type=1104 audit(1768350645.693:557): pid=1863 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:30:45.709340 systemd[1]: sshd@8-91.99.0.249:22-4.153.228.146:50372.service: Deactivated successfully. Jan 14 00:30:45.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-91.99.0.249:22-4.153.228.146:50372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:45.715288 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 00:30:45.715664 systemd[1]: session-8.scope: Consumed 8.469s CPU time, 223.2M memory peak. 
Jan 14 00:30:45.719793 systemd-logind[1580]: Session 8 logged out. Waiting for processes to exit. Jan 14 00:30:45.721756 systemd-logind[1580]: Removed session 8. Jan 14 00:30:48.654893 sshd[3290]: Connection closed by authenticating user root 5.187.35.21 port 35938 [preauth] Jan 14 00:30:48.655000 audit[3290]: USER_ERR pid=3290 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:48.658333 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 00:30:48.658556 kernel: audit: type=1109 audit(1768350648.655:559): pid=3290 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:48.660419 systemd[1]: sshd@22-91.99.0.249:22-5.187.35.21:35938.service: Deactivated successfully. Jan 14 00:30:48.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-91.99.0.249:22-5.187.35.21:35938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:48.665438 kernel: audit: type=1131 audit(1768350648.660:560): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-91.99.0.249:22-5.187.35.21:35938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:48.690165 systemd[1]: Started sshd@23-91.99.0.249:22-5.187.35.21:35984.service - OpenSSH per-connection server daemon (5.187.35.21:35984). Jan 14 00:30:48.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-91.99.0.249:22-5.187.35.21:35984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:48.694837 kernel: audit: type=1130 audit(1768350648.689:561): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-91.99.0.249:22-5.187.35.21:35984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:30:49.808000 audit[3318]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3318 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:30:49.810846 kernel: audit: type=1325 audit(1768350649.808:562): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3318 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:30:49.808000 audit[3318]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd0bc2820 a2=0 a3=1 items=0 ppid=3053 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:49.815855 kernel: audit: type=1300 audit(1768350649.808:562): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd0bc2820 a2=0 a3=1 items=0 ppid=3053 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:49.815977 kernel: audit: type=1327 audit(1768350649.808:562): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:30:49.808000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:30:49.819000 audit[3318]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3318 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:30:49.819000 audit[3318]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd0bc2820 a2=0 a3=1 items=0 ppid=3053 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:49.824089 kernel: audit: type=1325 audit(1768350649.819:563): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3318 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:30:49.824203 kernel: audit: type=1300 audit(1768350649.819:563): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd0bc2820 a2=0 a3=1 items=0 ppid=3053 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:49.819000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:30:49.826290 kernel: audit: type=1327 audit(1768350649.819:563): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:30:49.842000 audit[3320]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3320 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:30:49.846849 kernel: audit: type=1325 audit(1768350649.842:564): table=filter:107 family=2 entries=16 op=nft_register_rule pid=3320 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:30:49.842000 audit[3320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc72c2260 a2=0 a3=1 items=0 ppid=3053 pid=3320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:49.842000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:30:49.847000 audit[3320]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3320 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:30:49.847000 audit[3320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc72c2260 a2=0 a3=1 items=0 ppid=3053 pid=3320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:49.847000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:30:52.114660 sshd[3314]: Connection closed by authenticating user root 5.187.35.21 port 35984 [preauth] Jan 14 00:30:52.115000 audit[3314]: USER_ERR pid=3314 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:52.120306 systemd[1]: sshd@23-91.99.0.249:22-5.187.35.21:35984.service: Deactivated successfully. Jan 14 00:30:52.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-91.99.0.249:22-5.187.35.21:35984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:52.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-91.99.0.249:22-5.187.35.21:31056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:52.144089 systemd[1]: Started sshd@24-91.99.0.249:22-5.187.35.21:31056.service - OpenSSH per-connection server daemon (5.187.35.21:31056). 
Jan 14 00:30:54.761000 audit[3330]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3330 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:30:54.764271 kernel: kauditd_printk_skb: 8 callbacks suppressed Jan 14 00:30:54.764329 kernel: audit: type=1325 audit(1768350654.761:569): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3330 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:30:54.761000 audit[3330]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffcbd508f0 a2=0 a3=1 items=0 ppid=3053 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:54.761000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:30:54.774962 kernel: audit: type=1300 audit(1768350654.761:569): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffcbd508f0 a2=0 a3=1 items=0 ppid=3053 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:54.775052 kernel: audit: type=1327 audit(1768350654.761:569): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:30:54.776000 audit[3330]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3330 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:30:54.776000 audit[3330]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcbd508f0 a2=0 a3=1 items=0 ppid=3053 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:54.782395 kernel: audit: type=1325 audit(1768350654.776:570): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3330 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:30:54.784102 kernel: audit: type=1300 audit(1768350654.776:570): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcbd508f0 a2=0 a3=1 items=0 ppid=3053 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:54.784178 kernel: audit: type=1327 audit(1768350654.776:570): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:30:54.776000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:30:54.822000 audit[3332]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3332 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:30:54.822000 audit[3332]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe524f310 a2=0 a3=1 items=0 ppid=3053 pid=3332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:54.829252 kernel: audit: type=1325 
audit(1768350654.822:571): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3332 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:30:54.829373 kernel: audit: type=1300 audit(1768350654.822:571): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe524f310 a2=0 a3=1 items=0 ppid=3053 pid=3332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:54.822000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:30:54.831850 kernel: audit: type=1327 audit(1768350654.822:571): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:30:54.828000 audit[3332]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3332 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:30:54.833342 kernel: audit: type=1325 audit(1768350654.828:572): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3332 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:30:54.828000 audit[3332]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe524f310 a2=0 a3=1 items=0 ppid=3053 pid=3332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:54.828000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:30:55.026658 sshd[3324]: Connection closed by authenticating user root 5.187.35.21 port 31056 [preauth] Jan 14 00:30:55.027000 audit[3324]: USER_ERR pid=3324 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:55.033717 systemd[1]: sshd@24-91.99.0.249:22-5.187.35.21:31056.service: Deactivated successfully. Jan 14 00:30:55.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-91.99.0.249:22-5.187.35.21:31056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:55.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-91.99.0.249:22-5.187.35.21:31066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:55.057256 systemd[1]: Started sshd@25-91.99.0.249:22-5.187.35.21:31066.service - OpenSSH per-connection server daemon (5.187.35.21:31066). 
Jan 14 00:30:55.926000 audit[3341]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:30:55.926000 audit[3341]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc6194cb0 a2=0 a3=1 items=0 ppid=3053 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:55.926000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:30:55.930000 audit[3341]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:30:55.930000 audit[3341]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc6194cb0 a2=0 a3=1 items=0 ppid=3053 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:55.930000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:30:57.947593 sshd[3336]: Invalid user Antminer from 5.187.35.21 port 31066 Jan 14 00:30:58.751936 sshd[3336]: Connection closed by invalid user Antminer 5.187.35.21 port 31066 [preauth] Jan 14 00:30:58.751000 audit[3336]: USER_ERR pid=3336 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:30:58.757010 systemd[1]: sshd@25-91.99.0.249:22-5.187.35.21:31066.service: Deactivated successfully. Jan 14 00:30:58.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-91.99.0.249:22-5.187.35.21:31066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:58.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-91.99.0.249:22-5.187.35.21:31096 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:30:58.786254 systemd[1]: Started sshd@26-91.99.0.249:22-5.187.35.21:31096.service - OpenSSH per-connection server daemon (5.187.35.21:31096). 
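Note: the sshd entries above record a string of failed pre-auth connections from 5.187.35.21 (repeated root attempts and an invalid user "Antminer"), each one starting and then stopping a per-connection sshd@N.service unit. A small sketch for summarizing that pattern offline, assuming the journal text has been saved to a local file (the name ssh.log below is hypothetical):

# Minimal sketch: count sshd pre-auth disconnects per client address so that
# repeated attempts like those from 5.187.35.21 stand out. Only lines of the
# form "Connection closed by <authenticating|invalid> user ... [preauth]" are
# counted; ordinary session closes are ignored.
import re
from collections import Counter

pattern = re.compile(
    r"Connection closed by (?:authenticating|invalid) user \S+ "
    r"(\d+\.\d+\.\d+\.\d+) port \d+ \[preauth\]"
)

counts = Counter()
with open("ssh.log") as fh:  # hypothetical file holding this journal text
    for line in fh:
        counts.update(pattern.findall(line))

for addr, n in counts.most_common():
    print(f"{n:5d}  {addr}")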
Jan 14 00:30:59.050000 audit[3348]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3348 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:30:59.050000 audit[3348]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffffe979c70 a2=0 a3=1 items=0 ppid=3053 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:59.050000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:30:59.064000 audit[3348]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3348 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:30:59.064000 audit[3348]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffe979c70 a2=0 a3=1 items=0 ppid=3053 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:59.064000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:30:59.116082 systemd[1]: Created slice kubepods-besteffort-pod71de06d2_91b0_49cc_b191_d8db0e52db0a.slice - libcontainer container kubepods-besteffort-pod71de06d2_91b0_49cc_b191_d8db0e52db0a.slice. Jan 14 00:30:59.175971 kubelet[2886]: I0114 00:30:59.175734 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71de06d2-91b0-49cc-b191-d8db0e52db0a-tigera-ca-bundle\") pod \"calico-typha-585f87649-b5vxz\" (UID: \"71de06d2-91b0-49cc-b191-d8db0e52db0a\") " pod="calico-system/calico-typha-585f87649-b5vxz" Jan 14 00:30:59.175971 kubelet[2886]: I0114 00:30:59.175798 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/71de06d2-91b0-49cc-b191-d8db0e52db0a-typha-certs\") pod \"calico-typha-585f87649-b5vxz\" (UID: \"71de06d2-91b0-49cc-b191-d8db0e52db0a\") " pod="calico-system/calico-typha-585f87649-b5vxz" Jan 14 00:30:59.175971 kubelet[2886]: I0114 00:30:59.175833 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxzb8\" (UniqueName: \"kubernetes.io/projected/71de06d2-91b0-49cc-b191-d8db0e52db0a-kube-api-access-hxzb8\") pod \"calico-typha-585f87649-b5vxz\" (UID: \"71de06d2-91b0-49cc-b191-d8db0e52db0a\") " pod="calico-system/calico-typha-585f87649-b5vxz" Jan 14 00:30:59.308000 audit[3350]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3350 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:30:59.308000 audit[3350]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe863ffd0 a2=0 a3=1 items=0 ppid=3053 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:59.308000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:30:59.312000 audit[3350]: NETFILTER_CFG table=nat:118 
family=2 entries=12 op=nft_register_rule pid=3350 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:30:59.312000 audit[3350]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe863ffd0 a2=0 a3=1 items=0 ppid=3053 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:59.312000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:30:59.421730 systemd[1]: Created slice kubepods-besteffort-pod0ea5006e_3581_47e9_9ec1_8ff4007597c1.slice - libcontainer container kubepods-besteffort-pod0ea5006e_3581_47e9_9ec1_8ff4007597c1.slice. Jan 14 00:30:59.422340 containerd[1612]: time="2026-01-14T00:30:59.422280741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-585f87649-b5vxz,Uid:71de06d2-91b0-49cc-b191-d8db0e52db0a,Namespace:calico-system,Attempt:0,}" Jan 14 00:30:59.468847 containerd[1612]: time="2026-01-14T00:30:59.468559480Z" level=info msg="connecting to shim 9f7f917d5b591593833b9d2b06d296a01d350e3d18fac22758feab9f8763c08e" address="unix:///run/containerd/s/0aa55e5fea9ba81073ccd9d9470a4501d7074b268f77fe3c226ccc4847865b29" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:30:59.479048 kubelet[2886]: I0114 00:30:59.478996 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ea5006e-3581-47e9-9ec1-8ff4007597c1-lib-modules\") pod \"calico-node-rr9n8\" (UID: \"0ea5006e-3581-47e9-9ec1-8ff4007597c1\") " pod="calico-system/calico-node-rr9n8" Jan 14 00:30:59.479488 kubelet[2886]: I0114 00:30:59.479465 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrwmg\" (UniqueName: \"kubernetes.io/projected/0ea5006e-3581-47e9-9ec1-8ff4007597c1-kube-api-access-qrwmg\") pod \"calico-node-rr9n8\" (UID: \"0ea5006e-3581-47e9-9ec1-8ff4007597c1\") " pod="calico-system/calico-node-rr9n8" Jan 14 00:30:59.480633 kubelet[2886]: I0114 00:30:59.480601 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0ea5006e-3581-47e9-9ec1-8ff4007597c1-cni-net-dir\") pod \"calico-node-rr9n8\" (UID: \"0ea5006e-3581-47e9-9ec1-8ff4007597c1\") " pod="calico-system/calico-node-rr9n8" Jan 14 00:30:59.482747 kubelet[2886]: I0114 00:30:59.482374 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0ea5006e-3581-47e9-9ec1-8ff4007597c1-node-certs\") pod \"calico-node-rr9n8\" (UID: \"0ea5006e-3581-47e9-9ec1-8ff4007597c1\") " pod="calico-system/calico-node-rr9n8" Jan 14 00:30:59.484054 kubelet[2886]: I0114 00:30:59.483914 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0ea5006e-3581-47e9-9ec1-8ff4007597c1-policysync\") pod \"calico-node-rr9n8\" (UID: \"0ea5006e-3581-47e9-9ec1-8ff4007597c1\") " pod="calico-system/calico-node-rr9n8" Jan 14 00:30:59.485647 kubelet[2886]: I0114 00:30:59.485362 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/0ea5006e-3581-47e9-9ec1-8ff4007597c1-var-run-calico\") pod \"calico-node-rr9n8\" (UID: \"0ea5006e-3581-47e9-9ec1-8ff4007597c1\") " pod="calico-system/calico-node-rr9n8" Jan 14 00:30:59.485647 kubelet[2886]: I0114 00:30:59.485411 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ea5006e-3581-47e9-9ec1-8ff4007597c1-tigera-ca-bundle\") pod \"calico-node-rr9n8\" (UID: \"0ea5006e-3581-47e9-9ec1-8ff4007597c1\") " pod="calico-system/calico-node-rr9n8" Jan 14 00:30:59.485647 kubelet[2886]: I0114 00:30:59.485432 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ea5006e-3581-47e9-9ec1-8ff4007597c1-xtables-lock\") pod \"calico-node-rr9n8\" (UID: \"0ea5006e-3581-47e9-9ec1-8ff4007597c1\") " pod="calico-system/calico-node-rr9n8" Jan 14 00:30:59.485647 kubelet[2886]: I0114 00:30:59.485454 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0ea5006e-3581-47e9-9ec1-8ff4007597c1-cni-log-dir\") pod \"calico-node-rr9n8\" (UID: \"0ea5006e-3581-47e9-9ec1-8ff4007597c1\") " pod="calico-system/calico-node-rr9n8" Jan 14 00:30:59.485647 kubelet[2886]: I0114 00:30:59.485469 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0ea5006e-3581-47e9-9ec1-8ff4007597c1-var-lib-calico\") pod \"calico-node-rr9n8\" (UID: \"0ea5006e-3581-47e9-9ec1-8ff4007597c1\") " pod="calico-system/calico-node-rr9n8" Jan 14 00:30:59.486121 kubelet[2886]: I0114 00:30:59.485486 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0ea5006e-3581-47e9-9ec1-8ff4007597c1-cni-bin-dir\") pod \"calico-node-rr9n8\" (UID: \"0ea5006e-3581-47e9-9ec1-8ff4007597c1\") " pod="calico-system/calico-node-rr9n8" Jan 14 00:30:59.486121 kubelet[2886]: I0114 00:30:59.485507 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0ea5006e-3581-47e9-9ec1-8ff4007597c1-flexvol-driver-host\") pod \"calico-node-rr9n8\" (UID: \"0ea5006e-3581-47e9-9ec1-8ff4007597c1\") " pod="calico-system/calico-node-rr9n8" Jan 14 00:30:59.521307 systemd[1]: Started cri-containerd-9f7f917d5b591593833b9d2b06d296a01d350e3d18fac22758feab9f8763c08e.scope - libcontainer container 9f7f917d5b591593833b9d2b06d296a01d350e3d18fac22758feab9f8763c08e. 
Jan 14 00:30:59.572014 kubelet[2886]: E0114 00:30:59.571761 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8rkfb" podUID="c4ec9a31-66c9-4bf7-a831-6c170af7211c" Jan 14 00:30:59.589845 kubelet[2886]: E0114 00:30:59.588328 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.589845 kubelet[2886]: W0114 00:30:59.588367 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.589845 kubelet[2886]: E0114 00:30:59.588395 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.589845 kubelet[2886]: E0114 00:30:59.588979 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.589845 kubelet[2886]: W0114 00:30:59.588999 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.589845 kubelet[2886]: E0114 00:30:59.589016 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.589845 kubelet[2886]: E0114 00:30:59.589230 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.589845 kubelet[2886]: W0114 00:30:59.589239 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.589845 kubelet[2886]: E0114 00:30:59.589250 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.590198 kubelet[2886]: E0114 00:30:59.590139 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.590198 kubelet[2886]: W0114 00:30:59.590152 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.590198 kubelet[2886]: E0114 00:30:59.590170 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:30:59.591933 kubelet[2886]: E0114 00:30:59.590636 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.591933 kubelet[2886]: W0114 00:30:59.590655 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.591933 kubelet[2886]: E0114 00:30:59.590683 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.591933 kubelet[2886]: E0114 00:30:59.590864 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.591933 kubelet[2886]: W0114 00:30:59.590876 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.591933 kubelet[2886]: E0114 00:30:59.590886 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.591933 kubelet[2886]: E0114 00:30:59.591029 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.591933 kubelet[2886]: W0114 00:30:59.591036 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.591933 kubelet[2886]: E0114 00:30:59.591045 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.591933 kubelet[2886]: E0114 00:30:59.591321 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.592443 kubelet[2886]: W0114 00:30:59.591330 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.592443 kubelet[2886]: E0114 00:30:59.591339 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.592443 kubelet[2886]: E0114 00:30:59.591512 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.592443 kubelet[2886]: W0114 00:30:59.591522 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.592443 kubelet[2886]: E0114 00:30:59.591581 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:30:59.592443 kubelet[2886]: E0114 00:30:59.592012 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.592443 kubelet[2886]: W0114 00:30:59.592024 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.592443 kubelet[2886]: E0114 00:30:59.592040 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.593038 kubelet[2886]: E0114 00:30:59.592652 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.593038 kubelet[2886]: W0114 00:30:59.592674 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.593038 kubelet[2886]: E0114 00:30:59.592688 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.595072 kubelet[2886]: E0114 00:30:59.593956 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.595072 kubelet[2886]: W0114 00:30:59.594931 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.595072 kubelet[2886]: E0114 00:30:59.594957 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.600494 kubelet[2886]: E0114 00:30:59.598941 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.601850 kubelet[2886]: W0114 00:30:59.600764 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.602059 kubelet[2886]: E0114 00:30:59.602030 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.602509 kubelet[2886]: E0114 00:30:59.602489 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.602653 kubelet[2886]: W0114 00:30:59.602633 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.604848 kubelet[2886]: E0114 00:30:59.603144 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:30:59.605324 kubelet[2886]: E0114 00:30:59.605303 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.605535 kubelet[2886]: W0114 00:30:59.605411 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.605535 kubelet[2886]: E0114 00:30:59.605438 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.605804 kubelet[2886]: E0114 00:30:59.605782 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.606076 kubelet[2886]: W0114 00:30:59.605943 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.606076 kubelet[2886]: E0114 00:30:59.605968 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.606301 kubelet[2886]: E0114 00:30:59.606286 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.606467 kubelet[2886]: W0114 00:30:59.606372 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.606467 kubelet[2886]: E0114 00:30:59.606395 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.606657 kubelet[2886]: E0114 00:30:59.606643 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.606937 kubelet[2886]: W0114 00:30:59.606724 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.606937 kubelet[2886]: E0114 00:30:59.606743 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.609361 kubelet[2886]: E0114 00:30:59.609224 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.609361 kubelet[2886]: W0114 00:30:59.609246 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.609361 kubelet[2886]: E0114 00:30:59.609269 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:30:59.609798 kubelet[2886]: E0114 00:30:59.609781 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.610042 kubelet[2886]: W0114 00:30:59.609905 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.610042 kubelet[2886]: E0114 00:30:59.609927 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.611105 kubelet[2886]: E0114 00:30:59.611073 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.611228 kubelet[2886]: W0114 00:30:59.611211 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.611291 kubelet[2886]: E0114 00:30:59.611279 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.611575 kubelet[2886]: E0114 00:30:59.611559 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.611000 audit: BPF prog-id=149 op=LOAD Jan 14 00:30:59.612288 kubelet[2886]: W0114 00:30:59.611656 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.612288 kubelet[2886]: E0114 00:30:59.611678 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.613887 kubelet[2886]: E0114 00:30:59.613068 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.613887 kubelet[2886]: W0114 00:30:59.613095 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.613887 kubelet[2886]: E0114 00:30:59.613115 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:30:59.614251 kubelet[2886]: E0114 00:30:59.614232 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.613000 audit: BPF prog-id=150 op=LOAD Jan 14 00:30:59.613000 audit[3375]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3363 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:59.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966376639313764356235393135393338333362396432623036643239 Jan 14 00:30:59.613000 audit: BPF prog-id=150 op=UNLOAD Jan 14 00:30:59.613000 audit[3375]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3363 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:59.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966376639313764356235393135393338333362396432623036643239 Jan 14 00:30:59.613000 audit: BPF prog-id=151 op=LOAD Jan 14 00:30:59.613000 audit[3375]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3363 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:59.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966376639313764356235393135393338333362396432623036643239 Jan 14 00:30:59.613000 audit: BPF prog-id=152 op=LOAD Jan 14 00:30:59.613000 audit[3375]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3363 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:59.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966376639313764356235393135393338333362396432623036643239 Jan 14 00:30:59.613000 audit: BPF prog-id=152 op=UNLOAD Jan 14 00:30:59.613000 audit[3375]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3363 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:59.613000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966376639313764356235393135393338333362396432623036643239 Jan 14 00:30:59.613000 audit: BPF prog-id=151 op=UNLOAD Jan 14 00:30:59.613000 audit[3375]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3363 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:59.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966376639313764356235393135393338333362396432623036643239 Jan 14 00:30:59.614000 audit: BPF prog-id=153 op=LOAD Jan 14 00:30:59.614000 audit[3375]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3363 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:59.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966376639313764356235393135393338333362396432623036643239 Jan 14 00:30:59.615945 kubelet[2886]: W0114 00:30:59.614324 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.615945 kubelet[2886]: E0114 00:30:59.614350 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.619005 kubelet[2886]: E0114 00:30:59.616991 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.619005 kubelet[2886]: W0114 00:30:59.617019 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.619005 kubelet[2886]: E0114 00:30:59.617043 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.619471 kubelet[2886]: E0114 00:30:59.619294 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.619471 kubelet[2886]: W0114 00:30:59.619322 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.619471 kubelet[2886]: E0114 00:30:59.619344 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:30:59.619673 kubelet[2886]: E0114 00:30:59.619658 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.619734 kubelet[2886]: W0114 00:30:59.619722 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.619791 kubelet[2886]: E0114 00:30:59.619781 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.620116 kubelet[2886]: E0114 00:30:59.620101 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.620376 kubelet[2886]: W0114 00:30:59.620190 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.620376 kubelet[2886]: E0114 00:30:59.620216 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.621855 kubelet[2886]: E0114 00:30:59.620774 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.622032 kubelet[2886]: W0114 00:30:59.622004 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.622118 kubelet[2886]: E0114 00:30:59.622102 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.622496 kubelet[2886]: E0114 00:30:59.622483 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.622675 kubelet[2886]: W0114 00:30:59.622624 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.622675 kubelet[2886]: E0114 00:30:59.622645 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.649328 kubelet[2886]: E0114 00:30:59.649267 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.649328 kubelet[2886]: W0114 00:30:59.649296 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.649633 kubelet[2886]: E0114 00:30:59.649612 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:30:59.650394 kubelet[2886]: E0114 00:30:59.650278 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.650776 kubelet[2886]: W0114 00:30:59.650527 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.651528 kubelet[2886]: E0114 00:30:59.650846 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.652388 kubelet[2886]: E0114 00:30:59.652318 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.652388 kubelet[2886]: W0114 00:30:59.652345 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.652388 kubelet[2886]: E0114 00:30:59.652369 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.653827 kubelet[2886]: E0114 00:30:59.653571 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.654438 kubelet[2886]: W0114 00:30:59.653802 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.654438 kubelet[2886]: E0114 00:30:59.654346 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.655298 kubelet[2886]: E0114 00:30:59.655274 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.655638 kubelet[2886]: W0114 00:30:59.655472 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.655638 kubelet[2886]: E0114 00:30:59.655509 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.656408 kubelet[2886]: E0114 00:30:59.656365 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.656687 kubelet[2886]: W0114 00:30:59.656573 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.656687 kubelet[2886]: E0114 00:30:59.656599 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:30:59.657383 kubelet[2886]: E0114 00:30:59.657313 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.657383 kubelet[2886]: W0114 00:30:59.657330 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.657383 kubelet[2886]: E0114 00:30:59.657348 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.658293 kubelet[2886]: E0114 00:30:59.658222 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.658293 kubelet[2886]: W0114 00:30:59.658245 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.658293 kubelet[2886]: E0114 00:30:59.658263 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.658774 kubelet[2886]: E0114 00:30:59.658639 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.658774 kubelet[2886]: W0114 00:30:59.658662 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.658774 kubelet[2886]: E0114 00:30:59.658678 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.658923 kubelet[2886]: E0114 00:30:59.658903 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.658923 kubelet[2886]: W0114 00:30:59.658918 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.659010 kubelet[2886]: E0114 00:30:59.658928 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.659883 kubelet[2886]: E0114 00:30:59.659061 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.659883 kubelet[2886]: W0114 00:30:59.659069 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.659883 kubelet[2886]: E0114 00:30:59.659077 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:30:59.659883 kubelet[2886]: E0114 00:30:59.659296 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.659883 kubelet[2886]: W0114 00:30:59.659308 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.659883 kubelet[2886]: E0114 00:30:59.659319 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.660208 kubelet[2886]: E0114 00:30:59.660133 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.660208 kubelet[2886]: W0114 00:30:59.660153 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.660208 kubelet[2886]: E0114 00:30:59.660174 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.660652 kubelet[2886]: E0114 00:30:59.660619 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.660652 kubelet[2886]: W0114 00:30:59.660645 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.660652 kubelet[2886]: E0114 00:30:59.660666 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.661901 kubelet[2886]: E0114 00:30:59.661868 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.661901 kubelet[2886]: W0114 00:30:59.661896 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.662149 kubelet[2886]: E0114 00:30:59.661924 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.662285 kubelet[2886]: E0114 00:30:59.662265 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.662285 kubelet[2886]: W0114 00:30:59.662282 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.662285 kubelet[2886]: E0114 00:30:59.662295 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:30:59.663003 kubelet[2886]: E0114 00:30:59.662979 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.663240 kubelet[2886]: W0114 00:30:59.663006 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.663240 kubelet[2886]: E0114 00:30:59.663023 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.663431 kubelet[2886]: E0114 00:30:59.663411 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.663431 kubelet[2886]: W0114 00:30:59.663428 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.663497 kubelet[2886]: E0114 00:30:59.663443 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.663982 kubelet[2886]: E0114 00:30:59.663956 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.663982 kubelet[2886]: W0114 00:30:59.663973 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.664096 kubelet[2886]: E0114 00:30:59.663986 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.664936 kubelet[2886]: E0114 00:30:59.664913 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.664936 kubelet[2886]: W0114 00:30:59.664931 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.665044 kubelet[2886]: E0114 00:30:59.664946 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.665152 kubelet[2886]: E0114 00:30:59.665139 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.665152 kubelet[2886]: W0114 00:30:59.665150 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.665233 kubelet[2886]: E0114 00:30:59.665160 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:30:59.687200 kubelet[2886]: E0114 00:30:59.687125 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.687200 kubelet[2886]: W0114 00:30:59.687158 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.687582 kubelet[2886]: E0114 00:30:59.687441 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.687582 kubelet[2886]: I0114 00:30:59.687502 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4ec9a31-66c9-4bf7-a831-6c170af7211c-kubelet-dir\") pod \"csi-node-driver-8rkfb\" (UID: \"c4ec9a31-66c9-4bf7-a831-6c170af7211c\") " pod="calico-system/csi-node-driver-8rkfb" Jan 14 00:30:59.687979 kubelet[2886]: E0114 00:30:59.687934 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.687979 kubelet[2886]: W0114 00:30:59.687952 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.688167 kubelet[2886]: E0114 00:30:59.688095 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.688167 kubelet[2886]: I0114 00:30:59.688138 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c4ec9a31-66c9-4bf7-a831-6c170af7211c-socket-dir\") pod \"csi-node-driver-8rkfb\" (UID: \"c4ec9a31-66c9-4bf7-a831-6c170af7211c\") " pod="calico-system/csi-node-driver-8rkfb" Jan 14 00:30:59.689002 kubelet[2886]: E0114 00:30:59.688954 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.689002 kubelet[2886]: W0114 00:30:59.688988 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.689259 kubelet[2886]: E0114 00:30:59.689009 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.689394 kubelet[2886]: E0114 00:30:59.689372 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.689394 kubelet[2886]: W0114 00:30:59.689392 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.689492 kubelet[2886]: E0114 00:30:59.689407 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:30:59.689757 kubelet[2886]: E0114 00:30:59.689724 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.689757 kubelet[2886]: W0114 00:30:59.689741 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.689757 kubelet[2886]: E0114 00:30:59.689754 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.689757 kubelet[2886]: I0114 00:30:59.689784 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv2gz\" (UniqueName: \"kubernetes.io/projected/c4ec9a31-66c9-4bf7-a831-6c170af7211c-kube-api-access-zv2gz\") pod \"csi-node-driver-8rkfb\" (UID: \"c4ec9a31-66c9-4bf7-a831-6c170af7211c\") " pod="calico-system/csi-node-driver-8rkfb" Jan 14 00:30:59.690268 kubelet[2886]: E0114 00:30:59.690212 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.690268 kubelet[2886]: W0114 00:30:59.690234 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.690268 kubelet[2886]: E0114 00:30:59.690251 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.690629 kubelet[2886]: E0114 00:30:59.690601 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.690629 kubelet[2886]: W0114 00:30:59.690622 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.690757 kubelet[2886]: E0114 00:30:59.690639 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.692064 kubelet[2886]: E0114 00:30:59.692026 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.692064 kubelet[2886]: W0114 00:30:59.692058 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.692207 kubelet[2886]: E0114 00:30:59.692083 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:30:59.693223 kubelet[2886]: E0114 00:30:59.693186 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.693223 kubelet[2886]: W0114 00:30:59.693219 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.693499 kubelet[2886]: E0114 00:30:59.693245 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.694060 kubelet[2886]: E0114 00:30:59.694028 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.694060 kubelet[2886]: W0114 00:30:59.694056 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.694291 kubelet[2886]: E0114 00:30:59.694075 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.694291 kubelet[2886]: I0114 00:30:59.694128 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c4ec9a31-66c9-4bf7-a831-6c170af7211c-varrun\") pod \"csi-node-driver-8rkfb\" (UID: \"c4ec9a31-66c9-4bf7-a831-6c170af7211c\") " pod="calico-system/csi-node-driver-8rkfb" Jan 14 00:30:59.695195 kubelet[2886]: E0114 00:30:59.695109 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.695195 kubelet[2886]: W0114 00:30:59.695143 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.695195 kubelet[2886]: E0114 00:30:59.695163 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.695195 kubelet[2886]: I0114 00:30:59.695195 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c4ec9a31-66c9-4bf7-a831-6c170af7211c-registration-dir\") pod \"csi-node-driver-8rkfb\" (UID: \"c4ec9a31-66c9-4bf7-a831-6c170af7211c\") " pod="calico-system/csi-node-driver-8rkfb" Jan 14 00:30:59.695480 kubelet[2886]: E0114 00:30:59.695450 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.695480 kubelet[2886]: W0114 00:30:59.695465 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.695480 kubelet[2886]: E0114 00:30:59.695477 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:30:59.695726 kubelet[2886]: E0114 00:30:59.695669 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.695726 kubelet[2886]: W0114 00:30:59.695678 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.695726 kubelet[2886]: E0114 00:30:59.695688 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.696069 kubelet[2886]: E0114 00:30:59.695969 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.696069 kubelet[2886]: W0114 00:30:59.695987 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.696069 kubelet[2886]: E0114 00:30:59.695999 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.696964 kubelet[2886]: E0114 00:30:59.696772 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.696964 kubelet[2886]: W0114 00:30:59.696797 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.696964 kubelet[2886]: E0114 00:30:59.696831 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.731480 containerd[1612]: time="2026-01-14T00:30:59.731421882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rr9n8,Uid:0ea5006e-3581-47e9-9ec1-8ff4007597c1,Namespace:calico-system,Attempt:0,}" Jan 14 00:30:59.768361 containerd[1612]: time="2026-01-14T00:30:59.768297506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-585f87649-b5vxz,Uid:71de06d2-91b0-49cc-b191-d8db0e52db0a,Namespace:calico-system,Attempt:0,} returns sandbox id \"9f7f917d5b591593833b9d2b06d296a01d350e3d18fac22758feab9f8763c08e\"" Jan 14 00:30:59.772859 containerd[1612]: time="2026-01-14T00:30:59.772388460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 00:30:59.797149 kubelet[2886]: E0114 00:30:59.797116 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.797494 kubelet[2886]: W0114 00:30:59.797310 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.797494 kubelet[2886]: E0114 00:30:59.797340 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:30:59.797996 kubelet[2886]: E0114 00:30:59.797975 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.798090 kubelet[2886]: W0114 00:30:59.798057 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.798264 kubelet[2886]: E0114 00:30:59.798183 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.798899 kubelet[2886]: E0114 00:30:59.798779 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.798961 kubelet[2886]: W0114 00:30:59.798934 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.799351 kubelet[2886]: E0114 00:30:59.798960 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.800142 kubelet[2886]: E0114 00:30:59.799688 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.800142 kubelet[2886]: W0114 00:30:59.799709 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.800142 kubelet[2886]: E0114 00:30:59.799727 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.800142 kubelet[2886]: E0114 00:30:59.800054 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.800142 kubelet[2886]: W0114 00:30:59.800066 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.800142 kubelet[2886]: E0114 00:30:59.800106 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.800685 kubelet[2886]: E0114 00:30:59.800322 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.800685 kubelet[2886]: W0114 00:30:59.800367 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.800685 kubelet[2886]: E0114 00:30:59.800427 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:30:59.801934 kubelet[2886]: E0114 00:30:59.800782 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.801934 kubelet[2886]: W0114 00:30:59.800902 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.801934 kubelet[2886]: E0114 00:30:59.800922 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.803021 kubelet[2886]: E0114 00:30:59.802997 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.803154 kubelet[2886]: W0114 00:30:59.803136 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.803297 kubelet[2886]: E0114 00:30:59.803222 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.803578 kubelet[2886]: E0114 00:30:59.803522 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.803578 kubelet[2886]: W0114 00:30:59.803559 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.803578 kubelet[2886]: E0114 00:30:59.803574 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.803942 kubelet[2886]: E0114 00:30:59.803846 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.803942 kubelet[2886]: W0114 00:30:59.803860 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.803942 kubelet[2886]: E0114 00:30:59.803870 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.804514 kubelet[2886]: E0114 00:30:59.804449 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.804643 kubelet[2886]: W0114 00:30:59.804615 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.804764 kubelet[2886]: E0114 00:30:59.804713 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:30:59.805727 kubelet[2886]: E0114 00:30:59.805694 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.805727 kubelet[2886]: W0114 00:30:59.805718 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.806041 kubelet[2886]: E0114 00:30:59.805742 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.807505 containerd[1612]: time="2026-01-14T00:30:59.807019778Z" level=info msg="connecting to shim 464d20972f0a618f7dc4ba36bdb294268305c2dc362648806d1d7b6398316a8f" address="unix:///run/containerd/s/70a003d5b456dc88b9f6e675d322c26dc2a5c24c7e5e0ae30460510e0c9fdbe1" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:30:59.807611 kubelet[2886]: E0114 00:30:59.807364 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.807611 kubelet[2886]: W0114 00:30:59.807410 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.807611 kubelet[2886]: E0114 00:30:59.807435 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.807874 kubelet[2886]: E0114 00:30:59.807753 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.807874 kubelet[2886]: W0114 00:30:59.807774 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.807874 kubelet[2886]: E0114 00:30:59.807787 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.807977 kubelet[2886]: E0114 00:30:59.807954 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.807977 kubelet[2886]: W0114 00:30:59.807962 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.807977 kubelet[2886]: E0114 00:30:59.807971 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:30:59.808986 kubelet[2886]: E0114 00:30:59.808953 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.808986 kubelet[2886]: W0114 00:30:59.808978 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.809261 kubelet[2886]: E0114 00:30:59.808996 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.809493 kubelet[2886]: E0114 00:30:59.809474 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.809665 kubelet[2886]: W0114 00:30:59.809573 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.809665 kubelet[2886]: E0114 00:30:59.809596 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.810097 kubelet[2886]: E0114 00:30:59.809975 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.810097 kubelet[2886]: W0114 00:30:59.810017 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.810097 kubelet[2886]: E0114 00:30:59.810033 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.810518 kubelet[2886]: E0114 00:30:59.810400 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.810518 kubelet[2886]: W0114 00:30:59.810414 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.810518 kubelet[2886]: E0114 00:30:59.810427 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.810938 kubelet[2886]: E0114 00:30:59.810919 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.811065 kubelet[2886]: W0114 00:30:59.811018 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.811243 kubelet[2886]: E0114 00:30:59.811042 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:30:59.811877 kubelet[2886]: E0114 00:30:59.811431 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.812219 kubelet[2886]: W0114 00:30:59.812145 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.812219 kubelet[2886]: E0114 00:30:59.812184 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.813177 kubelet[2886]: E0114 00:30:59.813000 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.813177 kubelet[2886]: W0114 00:30:59.813084 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.813720 kubelet[2886]: E0114 00:30:59.813492 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.816726 kubelet[2886]: E0114 00:30:59.816339 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.817444 kubelet[2886]: W0114 00:30:59.817241 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.818711 kubelet[2886]: E0114 00:30:59.817841 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.820501 kubelet[2886]: E0114 00:30:59.820292 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.821569 kubelet[2886]: W0114 00:30:59.821000 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.821569 kubelet[2886]: E0114 00:30:59.821039 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.823187 kubelet[2886]: E0114 00:30:59.823090 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.826747 kubelet[2886]: W0114 00:30:59.824890 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.826747 kubelet[2886]: E0114 00:30:59.824931 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:30:59.840699 kubelet[2886]: E0114 00:30:59.840585 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:30:59.840699 kubelet[2886]: W0114 00:30:59.840618 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:30:59.840699 kubelet[2886]: E0114 00:30:59.840647 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:30:59.847113 systemd[1]: Started cri-containerd-464d20972f0a618f7dc4ba36bdb294268305c2dc362648806d1d7b6398316a8f.scope - libcontainer container 464d20972f0a618f7dc4ba36bdb294268305c2dc362648806d1d7b6398316a8f. Jan 14 00:30:59.883115 kernel: kauditd_printk_skb: 48 callbacks suppressed Jan 14 00:30:59.883401 kernel: audit: type=1334 audit(1768350659.880:593): prog-id=154 op=LOAD Jan 14 00:30:59.880000 audit: BPF prog-id=154 op=LOAD Jan 14 00:30:59.883000 audit: BPF prog-id=155 op=LOAD Jan 14 00:30:59.885563 kernel: audit: type=1334 audit(1768350659.883:594): prog-id=155 op=LOAD Jan 14 00:30:59.888121 kernel: audit: type=1300 audit(1768350659.883:594): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3483 pid=3517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:59.883000 audit[3517]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3483 pid=3517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:59.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436346432303937326630613631386637646334626133366264623239 Jan 14 00:30:59.891044 kernel: audit: type=1327 audit(1768350659.883:594): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436346432303937326630613631386637646334626133366264623239 Jan 14 00:30:59.895886 kernel: audit: type=1334 audit(1768350659.883:595): prog-id=155 op=UNLOAD Jan 14 00:30:59.896017 kernel: audit: type=1300 audit(1768350659.883:595): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3483 pid=3517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:59.883000 audit: BPF prog-id=155 op=UNLOAD Jan 14 00:30:59.883000 audit[3517]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3483 pid=3517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:59.883000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436346432303937326630613631386637646334626133366264623239 Jan 14 00:30:59.899125 kernel: audit: type=1327 audit(1768350659.883:595): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436346432303937326630613631386637646334626133366264623239 Jan 14 00:30:59.883000 audit: BPF prog-id=156 op=LOAD Jan 14 00:30:59.899973 kernel: audit: type=1334 audit(1768350659.883:596): prog-id=156 op=LOAD Jan 14 00:30:59.883000 audit[3517]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3483 pid=3517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:59.903142 kernel: audit: type=1300 audit(1768350659.883:596): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3483 pid=3517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:59.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436346432303937326630613631386637646334626133366264623239 Jan 14 00:30:59.906334 kernel: audit: type=1327 audit(1768350659.883:596): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436346432303937326630613631386637646334626133366264623239 Jan 14 00:30:59.889000 audit: BPF prog-id=157 op=LOAD Jan 14 00:30:59.889000 audit[3517]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3483 pid=3517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:59.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436346432303937326630613631386637646334626133366264623239 Jan 14 00:30:59.890000 audit: BPF prog-id=157 op=UNLOAD Jan 14 00:30:59.890000 audit[3517]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3483 pid=3517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:59.890000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436346432303937326630613631386637646334626133366264623239 Jan 14 00:30:59.890000 audit: BPF prog-id=156 op=UNLOAD Jan 14 00:30:59.890000 audit[3517]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3483 pid=3517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:59.890000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436346432303937326630613631386637646334626133366264623239 Jan 14 00:30:59.890000 audit: BPF prog-id=158 op=LOAD Jan 14 00:30:59.890000 audit[3517]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3483 pid=3517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:30:59.890000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436346432303937326630613631386637646334626133366264623239 Jan 14 00:30:59.940575 containerd[1612]: time="2026-01-14T00:30:59.940443120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rr9n8,Uid:0ea5006e-3581-47e9-9ec1-8ff4007597c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"464d20972f0a618f7dc4ba36bdb294268305c2dc362648806d1d7b6398316a8f\"" Jan 14 00:31:00.343000 audit[3549]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=3549 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:31:00.343000 audit[3549]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe9a2e360 a2=0 a3=1 items=0 ppid=3053 pid=3549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:00.343000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:31:00.349000 audit[3549]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=3549 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:31:00.349000 audit[3549]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe9a2e360 a2=0 a3=1 items=0 ppid=3053 pid=3549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:00.349000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:31:01.374473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1557400272.mount: Deactivated successfully. 
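A note on the recurring kubelet messages above: the FlexVolume prober walks /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, and for the nodeagent~uds directory it executes the driver binary uds with the single argument init, expecting a JSON status object on stdout. Because that executable is not installed on this node, the call produces empty output, and JSON-decoding an empty string is exactly what produces "unexpected end of JSON input". The Go sketch below is a minimal, hypothetical illustration of that call path, not the kubelet's actual code; the driver path and the init argument are taken from the log lines above, while the function and type names (callDriverInit, DriverStatus) are invented for the example.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus mirrors the JSON shape a FlexVolume driver is expected to print,
// e.g. {"status":"Success","capabilities":{"attach":false}}. Illustrative only.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// callDriverInit runs "<driver> init" and decodes whatever appears on stdout.
func callDriverInit(driverPath string) (*DriverStatus, error) {
	out, execErr := exec.Command(driverPath, "init").CombinedOutput()
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// When the executable is missing, out is empty and err is
		// "unexpected end of JSON input" -- the message seen in the log.
		return nil, fmt.Errorf("failed to unmarshal output %q: %v (exec error: %v)", out, err, execErr)
	}
	return &st, nil
}

func main() {
	_, err := callDriverInit("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err)
}

Run on a machine without that binary, the program prints both the "executable file not found in $PATH" exec error and the empty-output unmarshal failure, matching the paired W/E kubelet lines repeated throughout this section.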
Jan 14 00:31:01.662393 sshd[3345]: Invalid user Antminer from 5.187.35.21 port 31096 Jan 14 00:31:01.767218 kubelet[2886]: E0114 00:31:01.767154 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8rkfb" podUID="c4ec9a31-66c9-4bf7-a831-6c170af7211c" Jan 14 00:31:02.233835 sshd[3345]: Connection closed by invalid user Antminer 5.187.35.21 port 31096 [preauth] Jan 14 00:31:02.233000 audit[3345]: USER_ERR pid=3345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:31:02.239275 systemd[1]: sshd@26-91.99.0.249:22-5.187.35.21:31096.service: Deactivated successfully. Jan 14 00:31:02.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-91.99.0.249:22-5.187.35.21:31096 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:31:02.263689 systemd[1]: Started sshd@27-91.99.0.249:22-5.187.35.21:29248.service - OpenSSH per-connection server daemon (5.187.35.21:29248). Jan 14 00:31:02.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-91.99.0.249:22-5.187.35.21:29248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:31:02.386678 containerd[1612]: time="2026-01-14T00:31:02.386160521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:31:02.389119 containerd[1612]: time="2026-01-14T00:31:02.389000780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Jan 14 00:31:02.391192 containerd[1612]: time="2026-01-14T00:31:02.390183562Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:31:02.392531 containerd[1612]: time="2026-01-14T00:31:02.392471368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:31:02.393250 containerd[1612]: time="2026-01-14T00:31:02.393212132Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.620775754s" Jan 14 00:31:02.393250 containerd[1612]: time="2026-01-14T00:31:02.393250730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 14 00:31:02.395796 containerd[1612]: time="2026-01-14T00:31:02.394985244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 00:31:02.415070 containerd[1612]: time="2026-01-14T00:31:02.415015572Z" level=info msg="CreateContainer within 
sandbox \"9f7f917d5b591593833b9d2b06d296a01d350e3d18fac22758feab9f8763c08e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 00:31:02.427118 containerd[1612]: time="2026-01-14T00:31:02.427051136Z" level=info msg="Container e88a39138a0a07cedddaab375b953dfe01d6a50f8ea3862dfc4b14331cf076b1: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:31:02.450170 containerd[1612]: time="2026-01-14T00:31:02.450097035Z" level=info msg="CreateContainer within sandbox \"9f7f917d5b591593833b9d2b06d296a01d350e3d18fac22758feab9f8763c08e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e88a39138a0a07cedddaab375b953dfe01d6a50f8ea3862dfc4b14331cf076b1\"" Jan 14 00:31:02.452709 containerd[1612]: time="2026-01-14T00:31:02.451440928Z" level=info msg="StartContainer for \"e88a39138a0a07cedddaab375b953dfe01d6a50f8ea3862dfc4b14331cf076b1\"" Jan 14 00:31:02.453732 containerd[1612]: time="2026-01-14T00:31:02.453665178Z" level=info msg="connecting to shim e88a39138a0a07cedddaab375b953dfe01d6a50f8ea3862dfc4b14331cf076b1" address="unix:///run/containerd/s/0aa55e5fea9ba81073ccd9d9470a4501d7074b268f77fe3c226ccc4847865b29" protocol=ttrpc version=3 Jan 14 00:31:02.488090 systemd[1]: Started cri-containerd-e88a39138a0a07cedddaab375b953dfe01d6a50f8ea3862dfc4b14331cf076b1.scope - libcontainer container e88a39138a0a07cedddaab375b953dfe01d6a50f8ea3862dfc4b14331cf076b1. Jan 14 00:31:02.508000 audit: BPF prog-id=159 op=LOAD Jan 14 00:31:02.509000 audit: BPF prog-id=160 op=LOAD Jan 14 00:31:02.509000 audit[3565]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=3363 pid=3565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:02.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538386133393133386130613037636564646461616233373562393533 Jan 14 00:31:02.509000 audit: BPF prog-id=160 op=UNLOAD Jan 14 00:31:02.509000 audit[3565]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3363 pid=3565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:02.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538386133393133386130613037636564646461616233373562393533 Jan 14 00:31:02.509000 audit: BPF prog-id=161 op=LOAD Jan 14 00:31:02.509000 audit[3565]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3363 pid=3565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:02.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538386133393133386130613037636564646461616233373562393533 Jan 14 00:31:02.509000 audit: BPF prog-id=162 op=LOAD 
Jan 14 00:31:02.509000 audit[3565]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3363 pid=3565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:02.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538386133393133386130613037636564646461616233373562393533 Jan 14 00:31:02.510000 audit: BPF prog-id=162 op=UNLOAD Jan 14 00:31:02.510000 audit[3565]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3363 pid=3565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:02.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538386133393133386130613037636564646461616233373562393533 Jan 14 00:31:02.510000 audit: BPF prog-id=161 op=UNLOAD Jan 14 00:31:02.510000 audit[3565]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3363 pid=3565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:02.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538386133393133386130613037636564646461616233373562393533 Jan 14 00:31:02.510000 audit: BPF prog-id=163 op=LOAD Jan 14 00:31:02.510000 audit[3565]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=3363 pid=3565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:02.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538386133393133386130613037636564646461616233373562393533 Jan 14 00:31:02.546700 containerd[1612]: time="2026-01-14T00:31:02.546559738Z" level=info msg="StartContainer for \"e88a39138a0a07cedddaab375b953dfe01d6a50f8ea3862dfc4b14331cf076b1\" returns successfully" Jan 14 00:31:02.986250 kubelet[2886]: E0114 00:31:02.986194 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:02.986250 kubelet[2886]: W0114 00:31:02.986230 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:02.986250 kubelet[2886]: E0114 00:31:02.986258 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:31:02.987107 kubelet[2886]: E0114 00:31:02.986701 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:02.987107 kubelet[2886]: W0114 00:31:02.986717 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:02.987107 kubelet[2886]: E0114 00:31:02.986763 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:02.988066 kubelet[2886]: E0114 00:31:02.988037 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:02.988066 kubelet[2886]: W0114 00:31:02.988063 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:02.988165 kubelet[2886]: E0114 00:31:02.988082 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:02.988264 kubelet[2886]: E0114 00:31:02.988239 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:02.988264 kubelet[2886]: W0114 00:31:02.988251 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:02.988264 kubelet[2886]: E0114 00:31:02.988259 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:02.988471 kubelet[2886]: E0114 00:31:02.988453 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:02.988471 kubelet[2886]: W0114 00:31:02.988467 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:02.988527 kubelet[2886]: E0114 00:31:02.988476 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:02.988620 kubelet[2886]: E0114 00:31:02.988596 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:02.988620 kubelet[2886]: W0114 00:31:02.988608 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:02.988620 kubelet[2886]: E0114 00:31:02.988615 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:31:02.988769 kubelet[2886]: E0114 00:31:02.988715 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:02.988769 kubelet[2886]: W0114 00:31:02.988721 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:02.988769 kubelet[2886]: E0114 00:31:02.988739 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:02.988886 kubelet[2886]: E0114 00:31:02.988869 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:02.988886 kubelet[2886]: W0114 00:31:02.988881 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:02.988886 kubelet[2886]: E0114 00:31:02.988888 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:02.989980 kubelet[2886]: E0114 00:31:02.989860 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:02.989980 kubelet[2886]: W0114 00:31:02.989877 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:02.989980 kubelet[2886]: E0114 00:31:02.989895 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:02.990103 kubelet[2886]: E0114 00:31:02.990082 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:02.990103 kubelet[2886]: W0114 00:31:02.990100 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:02.990165 kubelet[2886]: E0114 00:31:02.990109 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:02.990372 kubelet[2886]: E0114 00:31:02.990352 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:02.990372 kubelet[2886]: W0114 00:31:02.990367 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:02.990450 kubelet[2886]: E0114 00:31:02.990377 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:31:02.990546 kubelet[2886]: E0114 00:31:02.990527 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:02.990546 kubelet[2886]: W0114 00:31:02.990540 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:02.990624 kubelet[2886]: E0114 00:31:02.990548 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:02.990805 kubelet[2886]: E0114 00:31:02.990783 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:02.990805 kubelet[2886]: W0114 00:31:02.990798 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:02.990888 kubelet[2886]: E0114 00:31:02.990829 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:02.991009 kubelet[2886]: E0114 00:31:02.990990 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:02.991009 kubelet[2886]: W0114 00:31:02.991004 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:02.991073 kubelet[2886]: E0114 00:31:02.991012 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:02.991168 kubelet[2886]: E0114 00:31:02.991148 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:02.991168 kubelet[2886]: W0114 00:31:02.991162 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:02.991231 kubelet[2886]: E0114 00:31:02.991174 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:03.032092 kubelet[2886]: E0114 00:31:03.032039 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.032092 kubelet[2886]: W0114 00:31:03.032072 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.032092 kubelet[2886]: E0114 00:31:03.032098 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:31:03.033376 kubelet[2886]: E0114 00:31:03.033338 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.033376 kubelet[2886]: W0114 00:31:03.033366 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.033511 kubelet[2886]: E0114 00:31:03.033391 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:03.033754 kubelet[2886]: E0114 00:31:03.033714 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.033754 kubelet[2886]: W0114 00:31:03.033732 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.033754 kubelet[2886]: E0114 00:31:03.033744 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:03.034708 kubelet[2886]: E0114 00:31:03.034662 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.034708 kubelet[2886]: W0114 00:31:03.034694 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.034844 kubelet[2886]: E0114 00:31:03.034806 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:03.035330 kubelet[2886]: E0114 00:31:03.035286 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.035388 kubelet[2886]: W0114 00:31:03.035305 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.035438 kubelet[2886]: E0114 00:31:03.035388 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:03.035908 kubelet[2886]: E0114 00:31:03.035865 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.035908 kubelet[2886]: W0114 00:31:03.035889 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.035908 kubelet[2886]: E0114 00:31:03.035902 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:31:03.036390 kubelet[2886]: E0114 00:31:03.036260 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.036390 kubelet[2886]: W0114 00:31:03.036383 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.036530 kubelet[2886]: E0114 00:31:03.036402 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:03.037914 kubelet[2886]: E0114 00:31:03.037875 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.037914 kubelet[2886]: W0114 00:31:03.037908 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.038086 kubelet[2886]: E0114 00:31:03.037931 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:03.039805 kubelet[2886]: E0114 00:31:03.039765 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.039805 kubelet[2886]: W0114 00:31:03.039794 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.039944 kubelet[2886]: E0114 00:31:03.039831 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:03.040169 kubelet[2886]: E0114 00:31:03.040144 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.040169 kubelet[2886]: W0114 00:31:03.040164 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.040249 kubelet[2886]: E0114 00:31:03.040176 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:03.040403 kubelet[2886]: E0114 00:31:03.040383 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.040403 kubelet[2886]: W0114 00:31:03.040398 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.040474 kubelet[2886]: E0114 00:31:03.040409 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:31:03.040644 kubelet[2886]: E0114 00:31:03.040604 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.040644 kubelet[2886]: W0114 00:31:03.040619 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.040644 kubelet[2886]: E0114 00:31:03.040627 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:03.041032 kubelet[2886]: E0114 00:31:03.041010 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.041032 kubelet[2886]: W0114 00:31:03.041025 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.041127 kubelet[2886]: E0114 00:31:03.041036 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:03.041634 kubelet[2886]: E0114 00:31:03.041213 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.041634 kubelet[2886]: W0114 00:31:03.041243 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.041634 kubelet[2886]: E0114 00:31:03.041252 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:03.041804 kubelet[2886]: E0114 00:31:03.041776 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.041804 kubelet[2886]: W0114 00:31:03.041789 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.041804 kubelet[2886]: E0114 00:31:03.041799 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:03.043638 kubelet[2886]: E0114 00:31:03.043604 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.043638 kubelet[2886]: W0114 00:31:03.043626 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.043638 kubelet[2886]: E0114 00:31:03.043642 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:31:03.044007 kubelet[2886]: E0114 00:31:03.043986 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.044061 kubelet[2886]: W0114 00:31:03.044013 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.044061 kubelet[2886]: E0114 00:31:03.044024 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:03.044258 kubelet[2886]: E0114 00:31:03.044237 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.044258 kubelet[2886]: W0114 00:31:03.044251 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.044368 kubelet[2886]: E0114 00:31:03.044261 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:03.767659 kubelet[2886]: E0114 00:31:03.766688 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8rkfb" podUID="c4ec9a31-66c9-4bf7-a831-6c170af7211c" Jan 14 00:31:03.969835 kubelet[2886]: I0114 00:31:03.969624 2886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 00:31:03.997404 kubelet[2886]: E0114 00:31:03.997368 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.997404 kubelet[2886]: W0114 00:31:03.997395 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.998515 kubelet[2886]: E0114 00:31:03.997474 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:03.998515 kubelet[2886]: E0114 00:31:03.997869 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.998515 kubelet[2886]: W0114 00:31:03.997883 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.998515 kubelet[2886]: E0114 00:31:03.997896 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:31:03.998515 kubelet[2886]: E0114 00:31:03.998363 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.998515 kubelet[2886]: W0114 00:31:03.998380 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.998515 kubelet[2886]: E0114 00:31:03.998394 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:03.999060 kubelet[2886]: E0114 00:31:03.998693 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.999060 kubelet[2886]: W0114 00:31:03.998731 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.999060 kubelet[2886]: E0114 00:31:03.998745 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:03.999060 kubelet[2886]: E0114 00:31:03.999014 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.999060 kubelet[2886]: W0114 00:31:03.999024 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.999060 kubelet[2886]: E0114 00:31:03.999034 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:03.999389 kubelet[2886]: E0114 00:31:03.999150 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.999389 kubelet[2886]: W0114 00:31:03.999157 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.999389 kubelet[2886]: E0114 00:31:03.999278 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:03.999583 kubelet[2886]: E0114 00:31:03.999479 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.999583 kubelet[2886]: W0114 00:31:03.999493 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.999583 kubelet[2886]: E0114 00:31:03.999504 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:31:03.999744 kubelet[2886]: E0114 00:31:03.999699 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:03.999744 kubelet[2886]: W0114 00:31:03.999707 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:03.999744 kubelet[2886]: E0114 00:31:03.999717 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:04.000000 kubelet[2886]: E0114 00:31:03.999929 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:04.000000 kubelet[2886]: W0114 00:31:03.999940 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:04.000000 kubelet[2886]: E0114 00:31:03.999950 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:04.000201 kubelet[2886]: E0114 00:31:04.000177 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:04.000201 kubelet[2886]: W0114 00:31:04.000189 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:04.000335 kubelet[2886]: E0114 00:31:04.000199 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:04.000452 kubelet[2886]: E0114 00:31:04.000381 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:04.000452 kubelet[2886]: W0114 00:31:04.000389 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:04.000452 kubelet[2886]: E0114 00:31:04.000399 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:04.000614 kubelet[2886]: E0114 00:31:04.000563 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:04.000614 kubelet[2886]: W0114 00:31:04.000573 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:04.000614 kubelet[2886]: E0114 00:31:04.000583 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:31:04.000872 kubelet[2886]: E0114 00:31:04.000844 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:04.000872 kubelet[2886]: W0114 00:31:04.000854 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:04.000872 kubelet[2886]: E0114 00:31:04.000864 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:04.001031 kubelet[2886]: E0114 00:31:04.000983 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:04.001031 kubelet[2886]: W0114 00:31:04.000991 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:04.001031 kubelet[2886]: E0114 00:31:04.000998 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:04.001505 kubelet[2886]: E0114 00:31:04.001387 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:04.001557 kubelet[2886]: W0114 00:31:04.001504 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:04.001557 kubelet[2886]: E0114 00:31:04.001519 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:31:04.029334 containerd[1612]: time="2026-01-14T00:31:04.029247286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:31:04.030974 containerd[1612]: time="2026-01-14T00:31:04.030908951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 14 00:31:04.032156 containerd[1612]: time="2026-01-14T00:31:04.032099380Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:31:04.035081 containerd[1612]: time="2026-01-14T00:31:04.034985274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:31:04.036459 containerd[1612]: time="2026-01-14T00:31:04.036394781Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.640390027s" Jan 14 00:31:04.036459 containerd[1612]: time="2026-01-14T00:31:04.036455381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 14 00:31:04.043438 kubelet[2886]: E0114 00:31:04.043405 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:04.043438 kubelet[2886]: W0114 00:31:04.043429 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:04.043438 kubelet[2886]: E0114 00:31:04.043450 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:04.044173 kubelet[2886]: E0114 00:31:04.044044 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:04.044173 kubelet[2886]: W0114 00:31:04.044158 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:04.044173 kubelet[2886]: E0114 00:31:04.044179 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:31:04.044867 containerd[1612]: time="2026-01-14T00:31:04.044682786Z" level=info msg="CreateContainer within sandbox \"464d20972f0a618f7dc4ba36bdb294268305c2dc362648806d1d7b6398316a8f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 00:31:04.045357 kubelet[2886]: E0114 00:31:04.045240 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:04.045357 kubelet[2886]: W0114 00:31:04.045290 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:04.045357 kubelet[2886]: E0114 00:31:04.045309 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:04.045912 kubelet[2886]: E0114 00:31:04.045594 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:04.045912 kubelet[2886]: W0114 00:31:04.045607 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:04.045912 kubelet[2886]: E0114 00:31:04.045621 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:04.046152 kubelet[2886]: E0114 00:31:04.046090 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:04.046152 kubelet[2886]: W0114 00:31:04.046113 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:04.046246 kubelet[2886]: E0114 00:31:04.046217 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:04.046922 kubelet[2886]: E0114 00:31:04.046861 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:04.047331 kubelet[2886]: W0114 00:31:04.046967 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:04.047331 kubelet[2886]: E0114 00:31:04.046983 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:31:04.047331 kubelet[2886]: E0114 00:31:04.047226 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:04.047331 kubelet[2886]: W0114 00:31:04.047236 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:04.047331 kubelet[2886]: E0114 00:31:04.047247 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:04.047516 kubelet[2886]: E0114 00:31:04.047480 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:04.047516 kubelet[2886]: W0114 00:31:04.047490 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:04.047516 kubelet[2886]: E0114 00:31:04.047500 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:04.047767 kubelet[2886]: E0114 00:31:04.047735 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:04.047767 kubelet[2886]: W0114 00:31:04.047749 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:04.047767 kubelet[2886]: E0114 00:31:04.047761 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:04.048832 kubelet[2886]: E0114 00:31:04.047979 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:04.048832 kubelet[2886]: W0114 00:31:04.048001 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:04.048832 kubelet[2886]: E0114 00:31:04.048010 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:04.048832 kubelet[2886]: E0114 00:31:04.048206 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:04.048832 kubelet[2886]: W0114 00:31:04.048218 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:04.048832 kubelet[2886]: E0114 00:31:04.048237 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:31:04.048832 kubelet[2886]: E0114 00:31:04.048724 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:04.048832 kubelet[2886]: W0114 00:31:04.048737 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:04.048832 kubelet[2886]: E0114 00:31:04.048754 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:04.049681 kubelet[2886]: E0114 00:31:04.049553 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:04.049681 kubelet[2886]: W0114 00:31:04.049570 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:04.049681 kubelet[2886]: E0114 00:31:04.049587 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:04.050095 kubelet[2886]: E0114 00:31:04.049914 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:04.050095 kubelet[2886]: W0114 00:31:04.049932 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:04.050095 kubelet[2886]: E0114 00:31:04.049945 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:04.050363 kubelet[2886]: E0114 00:31:04.050267 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:04.050363 kubelet[2886]: W0114 00:31:04.050299 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:04.050363 kubelet[2886]: E0114 00:31:04.050314 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:04.051859 kubelet[2886]: E0114 00:31:04.051056 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:04.052104 kubelet[2886]: W0114 00:31:04.051957 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:04.052104 kubelet[2886]: E0114 00:31:04.051988 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:31:04.052397 kubelet[2886]: E0114 00:31:04.052378 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:04.052544 kubelet[2886]: W0114 00:31:04.052476 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:04.052544 kubelet[2886]: E0114 00:31:04.052496 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:04.052949 kubelet[2886]: E0114 00:31:04.052892 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:31:04.052949 kubelet[2886]: W0114 00:31:04.052914 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:31:04.052949 kubelet[2886]: E0114 00:31:04.052925 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:31:04.062441 containerd[1612]: time="2026-01-14T00:31:04.062331266Z" level=info msg="Container e97081dc952ef6027573ae68b8699ffcedb1926a9e2548b9ea527a0a6bee2444: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:31:04.075831 containerd[1612]: time="2026-01-14T00:31:04.075709705Z" level=info msg="CreateContainer within sandbox \"464d20972f0a618f7dc4ba36bdb294268305c2dc362648806d1d7b6398316a8f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e97081dc952ef6027573ae68b8699ffcedb1926a9e2548b9ea527a0a6bee2444\"" Jan 14 00:31:04.077990 containerd[1612]: time="2026-01-14T00:31:04.077927924Z" level=info msg="StartContainer for \"e97081dc952ef6027573ae68b8699ffcedb1926a9e2548b9ea527a0a6bee2444\"" Jan 14 00:31:04.081458 containerd[1612]: time="2026-01-14T00:31:04.081398773Z" level=info msg="connecting to shim e97081dc952ef6027573ae68b8699ffcedb1926a9e2548b9ea527a0a6bee2444" address="unix:///run/containerd/s/70a003d5b456dc88b9f6e675d322c26dc2a5c24c7e5e0ae30460510e0c9fdbe1" protocol=ttrpc version=3 Jan 14 00:31:04.106335 systemd[1]: Started cri-containerd-e97081dc952ef6027573ae68b8699ffcedb1926a9e2548b9ea527a0a6bee2444.scope - libcontainer container e97081dc952ef6027573ae68b8699ffcedb1926a9e2548b9ea527a0a6bee2444. 
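A note on the repeated driver-call.go / plugins.go messages above: kubelet is probing the FlexVolume plugin directory and trying to execute /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument. Because that binary is not installed yet, the call produces no output at all, and kubelet's JSON unmarshal of the empty reply fails; the flexvol-driver container started above (from the pod2daemon-flexvol image) is what eventually places the driver into that directory, after which the probe errors should stop. For orientation only, a minimal sketch of what a FlexVolume driver's init handler has to emit under the standard FlexVolume call convention, written here in Python; this is not Calico's actual uds driver:

#!/usr/bin/env python3
# Minimal FlexVolume-style entry point: invoked as "<driver> <operation> [json-args]".
# An "init" call is expected to print a JSON object with a "status" field; an empty
# reply is exactly what produces the "unexpected end of JSON input" errors in this log.
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Anything this sketch does not implement is reported as unsupported.
    print(json.dumps({"status": "Not supported", "message": "operation %r not implemented" % op}))
    return 1

if __name__ == "__main__":
    sys.exit(main())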
Jan 14 00:31:04.160000 audit: BPF prog-id=164 op=LOAD Jan 14 00:31:04.160000 audit[3671]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=3483 pid=3671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:04.160000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539373038316463393532656636303237353733616536386238363939 Jan 14 00:31:04.160000 audit: BPF prog-id=165 op=LOAD Jan 14 00:31:04.160000 audit[3671]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=3483 pid=3671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:04.160000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539373038316463393532656636303237353733616536386238363939 Jan 14 00:31:04.161000 audit: BPF prog-id=165 op=UNLOAD Jan 14 00:31:04.161000 audit[3671]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3483 pid=3671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:04.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539373038316463393532656636303237353733616536386238363939 Jan 14 00:31:04.161000 audit: BPF prog-id=164 op=UNLOAD Jan 14 00:31:04.161000 audit[3671]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3483 pid=3671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:04.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539373038316463393532656636303237353733616536386238363939 Jan 14 00:31:04.161000 audit: BPF prog-id=166 op=LOAD Jan 14 00:31:04.161000 audit[3671]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=3483 pid=3671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:04.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539373038316463393532656636303237353733616536386238363939 Jan 14 00:31:04.190272 containerd[1612]: time="2026-01-14T00:31:04.190184946Z" level=info msg="StartContainer for 
\"e97081dc952ef6027573ae68b8699ffcedb1926a9e2548b9ea527a0a6bee2444\" returns successfully" Jan 14 00:31:04.207600 systemd[1]: cri-containerd-e97081dc952ef6027573ae68b8699ffcedb1926a9e2548b9ea527a0a6bee2444.scope: Deactivated successfully. Jan 14 00:31:04.210000 audit: BPF prog-id=166 op=UNLOAD Jan 14 00:31:04.215941 containerd[1612]: time="2026-01-14T00:31:04.215897993Z" level=info msg="received container exit event container_id:\"e97081dc952ef6027573ae68b8699ffcedb1926a9e2548b9ea527a0a6bee2444\" id:\"e97081dc952ef6027573ae68b8699ffcedb1926a9e2548b9ea527a0a6bee2444\" pid:3685 exited_at:{seconds:1768350664 nanos:215436797}" Jan 14 00:31:04.248672 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e97081dc952ef6027573ae68b8699ffcedb1926a9e2548b9ea527a0a6bee2444-rootfs.mount: Deactivated successfully. Jan 14 00:31:04.982883 containerd[1612]: time="2026-01-14T00:31:04.981536969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 00:31:05.033084 kubelet[2886]: I0114 00:31:05.032929 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-585f87649-b5vxz" podStartSLOduration=3.410120662 podStartE2EDuration="6.032877072s" podCreationTimestamp="2026-01-14 00:30:59 +0000 UTC" firstStartedPulling="2026-01-14 00:30:59.771875291 +0000 UTC m=+30.167971126" lastFinishedPulling="2026-01-14 00:31:02.394631701 +0000 UTC m=+32.790727536" observedRunningTime="2026-01-14 00:31:03.011012129 +0000 UTC m=+33.407107964" watchObservedRunningTime="2026-01-14 00:31:05.032877072 +0000 UTC m=+35.428972947" Jan 14 00:31:05.770841 kubelet[2886]: E0114 00:31:05.770734 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8rkfb" podUID="c4ec9a31-66c9-4bf7-a831-6c170af7211c" Jan 14 00:31:05.836723 sshd[3561]: Connection closed by authenticating user root 5.187.35.21 port 29248 [preauth] Jan 14 00:31:05.840311 kernel: kauditd_printk_skb: 59 callbacks suppressed Jan 14 00:31:05.840462 kernel: audit: type=1109 audit(1768350665.833:620): pid=3561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:31:05.833000 audit[3561]: USER_ERR pid=3561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:31:05.847275 systemd[1]: sshd@27-91.99.0.249:22-5.187.35.21:29248.service: Deactivated successfully. Jan 14 00:31:05.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-91.99.0.249:22-5.187.35.21:29248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:31:05.852251 kernel: audit: type=1131 audit(1768350665.848:621): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-91.99.0.249:22-5.187.35.21:29248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:31:05.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-91.99.0.249:22-5.187.35.21:29266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:31:05.878084 systemd[1]: Started sshd@28-91.99.0.249:22-5.187.35.21:29266.service - OpenSSH per-connection server daemon (5.187.35.21:29266). Jan 14 00:31:05.881839 kernel: audit: type=1130 audit(1768350665.877:622): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-91.99.0.249:22-5.187.35.21:29266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:31:07.771237 kubelet[2886]: E0114 00:31:07.771136 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8rkfb" podUID="c4ec9a31-66c9-4bf7-a831-6c170af7211c" Jan 14 00:31:08.742001 containerd[1612]: time="2026-01-14T00:31:08.741902698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:31:08.744847 containerd[1612]: time="2026-01-14T00:31:08.744701236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Jan 14 00:31:08.746851 containerd[1612]: time="2026-01-14T00:31:08.746672300Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:31:08.750177 containerd[1612]: time="2026-01-14T00:31:08.750090512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:31:08.752837 containerd[1612]: time="2026-01-14T00:31:08.751317902Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.767428994s" Jan 14 00:31:08.752837 containerd[1612]: time="2026-01-14T00:31:08.751390581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 14 00:31:08.761448 containerd[1612]: time="2026-01-14T00:31:08.761391500Z" level=info msg="CreateContainer within sandbox \"464d20972f0a618f7dc4ba36bdb294268305c2dc362648806d1d7b6398316a8f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 00:31:08.776921 containerd[1612]: time="2026-01-14T00:31:08.773429723Z" level=info msg="Container f6e53b0aa79bc10eccd8b7eb66c2dcaf475e5695f583387be12c35bc4f9bf964: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:31:08.787018 containerd[1612]: time="2026-01-14T00:31:08.786798494Z" level=info msg="CreateContainer within sandbox \"464d20972f0a618f7dc4ba36bdb294268305c2dc362648806d1d7b6398316a8f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f6e53b0aa79bc10eccd8b7eb66c2dcaf475e5695f583387be12c35bc4f9bf964\"" Jan 14 
00:31:08.787675 containerd[1612]: time="2026-01-14T00:31:08.787634368Z" level=info msg="StartContainer for \"f6e53b0aa79bc10eccd8b7eb66c2dcaf475e5695f583387be12c35bc4f9bf964\"" Jan 14 00:31:08.791081 containerd[1612]: time="2026-01-14T00:31:08.791032180Z" level=info msg="connecting to shim f6e53b0aa79bc10eccd8b7eb66c2dcaf475e5695f583387be12c35bc4f9bf964" address="unix:///run/containerd/s/70a003d5b456dc88b9f6e675d322c26dc2a5c24c7e5e0ae30460510e0c9fdbe1" protocol=ttrpc version=3 Jan 14 00:31:08.826400 systemd[1]: Started cri-containerd-f6e53b0aa79bc10eccd8b7eb66c2dcaf475e5695f583387be12c35bc4f9bf964.scope - libcontainer container f6e53b0aa79bc10eccd8b7eb66c2dcaf475e5695f583387be12c35bc4f9bf964. Jan 14 00:31:08.894000 audit: BPF prog-id=167 op=LOAD Jan 14 00:31:08.894000 audit[3738]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3483 pid=3738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:08.899178 kernel: audit: type=1334 audit(1768350668.894:623): prog-id=167 op=LOAD Jan 14 00:31:08.899290 kernel: audit: type=1300 audit(1768350668.894:623): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3483 pid=3738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:08.899351 kernel: audit: type=1327 audit(1768350668.894:623): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636653533623061613739626331306563636438623765623636633264 Jan 14 00:31:08.894000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636653533623061613739626331306563636438623765623636633264 Jan 14 00:31:08.894000 audit: BPF prog-id=168 op=LOAD Jan 14 00:31:08.903560 kernel: audit: type=1334 audit(1768350668.894:624): prog-id=168 op=LOAD Jan 14 00:31:08.894000 audit[3738]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3483 pid=3738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:08.906537 kernel: audit: type=1300 audit(1768350668.894:624): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3483 pid=3738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:08.894000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636653533623061613739626331306563636438623765623636633264 Jan 14 00:31:08.909951 kernel: audit: type=1327 audit(1768350668.894:624): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636653533623061613739626331306563636438623765623636633264 Jan 14 00:31:08.898000 audit: BPF prog-id=168 op=UNLOAD Jan 14 00:31:08.911875 kernel: audit: type=1334 audit(1768350668.898:625): prog-id=168 op=UNLOAD Jan 14 00:31:08.898000 audit[3738]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3483 pid=3738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:08.898000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636653533623061613739626331306563636438623765623636633264 Jan 14 00:31:08.898000 audit: BPF prog-id=167 op=UNLOAD Jan 14 00:31:08.898000 audit[3738]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3483 pid=3738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:08.898000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636653533623061613739626331306563636438623765623636633264 Jan 14 00:31:08.898000 audit: BPF prog-id=169 op=LOAD Jan 14 00:31:08.898000 audit[3738]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3483 pid=3738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:08.898000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636653533623061613739626331306563636438623765623636633264 Jan 14 00:31:08.952089 containerd[1612]: time="2026-01-14T00:31:08.951943435Z" level=info msg="StartContainer for \"f6e53b0aa79bc10eccd8b7eb66c2dcaf475e5695f583387be12c35bc4f9bf964\" returns successfully" Jan 14 00:31:09.374795 sshd[3725]: Connection closed by authenticating user root 5.187.35.21 port 29266 [preauth] Jan 14 00:31:09.375000 audit[3725]: USER_ERR pid=3725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:31:09.381327 systemd[1]: sshd@28-91.99.0.249:22-5.187.35.21:29266.service: Deactivated successfully. Jan 14 00:31:09.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-91.99.0.249:22-5.187.35.21:29266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:31:09.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-91.99.0.249:22-5.187.35.21:29280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:31:09.403081 systemd[1]: Started sshd@29-91.99.0.249:22-5.187.35.21:29280.service - OpenSSH per-connection server daemon (5.187.35.21:29280). Jan 14 00:31:09.759539 containerd[1612]: time="2026-01-14T00:31:09.759201979Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 00:31:09.762938 systemd[1]: cri-containerd-f6e53b0aa79bc10eccd8b7eb66c2dcaf475e5695f583387be12c35bc4f9bf964.scope: Deactivated successfully. Jan 14 00:31:09.763371 systemd[1]: cri-containerd-f6e53b0aa79bc10eccd8b7eb66c2dcaf475e5695f583387be12c35bc4f9bf964.scope: Consumed 621ms CPU time, 188M memory peak, 165.9M written to disk. Jan 14 00:31:09.767000 audit: BPF prog-id=169 op=UNLOAD Jan 14 00:31:09.768303 kubelet[2886]: E0114 00:31:09.767277 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8rkfb" podUID="c4ec9a31-66c9-4bf7-a831-6c170af7211c" Jan 14 00:31:09.770292 containerd[1612]: time="2026-01-14T00:31:09.769888534Z" level=info msg="received container exit event container_id:\"f6e53b0aa79bc10eccd8b7eb66c2dcaf475e5695f583387be12c35bc4f9bf964\" id:\"f6e53b0aa79bc10eccd8b7eb66c2dcaf475e5695f583387be12c35bc4f9bf964\" pid:3753 exited_at:{seconds:1768350669 nanos:769379738}" Jan 14 00:31:09.775311 kubelet[2886]: I0114 00:31:09.775253 2886 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 14 00:31:09.818675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6e53b0aa79bc10eccd8b7eb66c2dcaf475e5695f583387be12c35bc4f9bf964-rootfs.mount: Deactivated successfully. Jan 14 00:31:09.944250 systemd[1]: Created slice kubepods-besteffort-pode13d0bbb_aebc_452d_9c9e_69de55abbb87.slice - libcontainer container kubepods-besteffort-pode13d0bbb_aebc_452d_9c9e_69de55abbb87.slice. Jan 14 00:31:09.976624 systemd[1]: Created slice kubepods-besteffort-pod900e30be_5423_4b2c_9623_a51920c0a748.slice - libcontainer container kubepods-besteffort-pod900e30be_5423_4b2c_9623_a51920c0a748.slice. Jan 14 00:31:09.997228 systemd[1]: Created slice kubepods-besteffort-pode1fe6e4c_0c0d_49c4_b91f_2f3917a3c39a.slice - libcontainer container kubepods-besteffort-pode1fe6e4c_0c0d_49c4_b91f_2f3917a3c39a.slice. 
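The "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized" message above is why the kubelet keeps reporting NetworkReady=false: containerd finds no CNI network configuration until Calico's install-cni container places one in /etc/cni/net.d (the write to calico-kubeconfig seen above comes from that container; the conventional 10-calico.conflist filename is an assumption about Calico defaults, not something shown in this log). A minimal sketch of inspecting that directory on the node:

```python
#!/usr/bin/env python3
# Minimal sketch: list what is in /etc/cni/net.d on this node. Until a
# *.conf/*.conflist network config appears there, containerd keeps logging
# "cni plugin not initialized". The directory and the calico-kubeconfig
# filename are taken from the log above; everything else is illustrative.
import json
import os

CNI_DIR = "/etc/cni/net.d"

def list_cni_configs(cni_dir: str = CNI_DIR) -> None:
    if not os.path.isdir(cni_dir):
        print(f"{cni_dir} does not exist yet")
        return
    entries = sorted(os.listdir(cni_dir))
    if not entries:
        print(f"{cni_dir} is empty: no CNI network config has been written yet")
        return
    for name in entries:
        path = os.path.join(cni_dir, name)
        if name.endswith((".conf", ".conflist")):
            with open(path) as f:
                conf = json.load(f)
            print(f"{name}: network config {conf.get('name')!r} "
                  f"(cniVersion {conf.get('cniVersion')!r})")
        else:
            # e.g. calico-kubeconfig, which the install-cni container wrote above
            print(f"{name}: {os.path.getsize(path)} bytes (not a network config)")

if __name__ == "__main__":
    list_cni_configs()
```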
Jan 14 00:31:10.003102 kubelet[2886]: I0114 00:31:10.003052 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgqzs\" (UniqueName: \"kubernetes.io/projected/e13d0bbb-aebc-452d-9c9e-69de55abbb87-kube-api-access-qgqzs\") pod \"whisker-646545d69-xpxrn\" (UID: \"e13d0bbb-aebc-452d-9c9e-69de55abbb87\") " pod="calico-system/whisker-646545d69-xpxrn" Jan 14 00:31:10.003102 kubelet[2886]: I0114 00:31:10.003099 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5rvz\" (UniqueName: \"kubernetes.io/projected/e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a-kube-api-access-j5rvz\") pod \"calico-apiserver-6b56c4d86d-wv9l2\" (UID: \"e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a\") " pod="calico-apiserver/calico-apiserver-6b56c4d86d-wv9l2" Jan 14 00:31:10.003386 kubelet[2886]: I0114 00:31:10.003185 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/900e30be-5423-4b2c-9623-a51920c0a748-calico-apiserver-certs\") pod \"calico-apiserver-6b56c4d86d-jdxwl\" (UID: \"900e30be-5423-4b2c-9623-a51920c0a748\") " pod="calico-apiserver/calico-apiserver-6b56c4d86d-jdxwl" Jan 14 00:31:10.003386 kubelet[2886]: I0114 00:31:10.003211 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e13d0bbb-aebc-452d-9c9e-69de55abbb87-whisker-backend-key-pair\") pod \"whisker-646545d69-xpxrn\" (UID: \"e13d0bbb-aebc-452d-9c9e-69de55abbb87\") " pod="calico-system/whisker-646545d69-xpxrn" Jan 14 00:31:10.003386 kubelet[2886]: I0114 00:31:10.003228 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a-calico-apiserver-certs\") pod \"calico-apiserver-6b56c4d86d-wv9l2\" (UID: \"e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a\") " pod="calico-apiserver/calico-apiserver-6b56c4d86d-wv9l2" Jan 14 00:31:10.003386 kubelet[2886]: I0114 00:31:10.003253 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvhph\" (UniqueName: \"kubernetes.io/projected/900e30be-5423-4b2c-9623-a51920c0a748-kube-api-access-tvhph\") pod \"calico-apiserver-6b56c4d86d-jdxwl\" (UID: \"900e30be-5423-4b2c-9623-a51920c0a748\") " pod="calico-apiserver/calico-apiserver-6b56c4d86d-jdxwl" Jan 14 00:31:10.003386 kubelet[2886]: I0114 00:31:10.003272 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e13d0bbb-aebc-452d-9c9e-69de55abbb87-whisker-ca-bundle\") pod \"whisker-646545d69-xpxrn\" (UID: \"e13d0bbb-aebc-452d-9c9e-69de55abbb87\") " pod="calico-system/whisker-646545d69-xpxrn" Jan 14 00:31:10.028763 systemd[1]: Created slice kubepods-burstable-pod4dd4a077_4753_4782_8d25_a0a09436f34f.slice - libcontainer container kubepods-burstable-pod4dd4a077_4753_4782_8d25_a0a09436f34f.slice. Jan 14 00:31:10.042223 systemd[1]: Created slice kubepods-besteffort-pod67055487_2b15_4e2a_8975_7fee787b4309.slice - libcontainer container kubepods-besteffort-pod67055487_2b15_4e2a_8975_7fee787b4309.slice. 
Jan 14 00:31:10.055333 systemd[1]: Created slice kubepods-besteffort-podc604fd3c_83a2_496b_a7f9_8f8a03e00409.slice - libcontainer container kubepods-besteffort-podc604fd3c_83a2_496b_a7f9_8f8a03e00409.slice. Jan 14 00:31:10.069975 containerd[1612]: time="2026-01-14T00:31:10.069684425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 00:31:10.075289 systemd[1]: Created slice kubepods-burstable-pod0d97a284_7542_4401_aed1_52eb725b1c6d.slice - libcontainer container kubepods-burstable-pod0d97a284_7542_4401_aed1_52eb725b1c6d.slice. Jan 14 00:31:10.104415 kubelet[2886]: I0114 00:31:10.103993 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4dd4a077-4753-4782-8d25-a0a09436f34f-config-volume\") pod \"coredns-674b8bbfcf-jkl7t\" (UID: \"4dd4a077-4753-4782-8d25-a0a09436f34f\") " pod="kube-system/coredns-674b8bbfcf-jkl7t" Jan 14 00:31:10.104585 kubelet[2886]: I0114 00:31:10.104432 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67055487-2b15-4e2a-8975-7fee787b4309-goldmane-ca-bundle\") pod \"goldmane-666569f655-nsbsf\" (UID: \"67055487-2b15-4e2a-8975-7fee787b4309\") " pod="calico-system/goldmane-666569f655-nsbsf" Jan 14 00:31:10.104635 kubelet[2886]: I0114 00:31:10.104596 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48rw9\" (UniqueName: \"kubernetes.io/projected/4dd4a077-4753-4782-8d25-a0a09436f34f-kube-api-access-48rw9\") pod \"coredns-674b8bbfcf-jkl7t\" (UID: \"4dd4a077-4753-4782-8d25-a0a09436f34f\") " pod="kube-system/coredns-674b8bbfcf-jkl7t" Jan 14 00:31:10.104635 kubelet[2886]: I0114 00:31:10.104618 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/67055487-2b15-4e2a-8975-7fee787b4309-goldmane-key-pair\") pod \"goldmane-666569f655-nsbsf\" (UID: \"67055487-2b15-4e2a-8975-7fee787b4309\") " pod="calico-system/goldmane-666569f655-nsbsf" Jan 14 00:31:10.104685 kubelet[2886]: I0114 00:31:10.104640 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67055487-2b15-4e2a-8975-7fee787b4309-config\") pod \"goldmane-666569f655-nsbsf\" (UID: \"67055487-2b15-4e2a-8975-7fee787b4309\") " pod="calico-system/goldmane-666569f655-nsbsf" Jan 14 00:31:10.104685 kubelet[2886]: I0114 00:31:10.104659 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgztm\" (UniqueName: \"kubernetes.io/projected/67055487-2b15-4e2a-8975-7fee787b4309-kube-api-access-pgztm\") pod \"goldmane-666569f655-nsbsf\" (UID: \"67055487-2b15-4e2a-8975-7fee787b4309\") " pod="calico-system/goldmane-666569f655-nsbsf" Jan 14 00:31:10.104738 kubelet[2886]: I0114 00:31:10.104693 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d97a284-7542-4401-aed1-52eb725b1c6d-config-volume\") pod \"coredns-674b8bbfcf-zw96t\" (UID: \"0d97a284-7542-4401-aed1-52eb725b1c6d\") " pod="kube-system/coredns-674b8bbfcf-zw96t" Jan 14 00:31:10.104738 kubelet[2886]: I0114 00:31:10.104715 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-425qr\" (UniqueName: \"kubernetes.io/projected/0d97a284-7542-4401-aed1-52eb725b1c6d-kube-api-access-425qr\") pod \"coredns-674b8bbfcf-zw96t\" (UID: \"0d97a284-7542-4401-aed1-52eb725b1c6d\") " pod="kube-system/coredns-674b8bbfcf-zw96t" Jan 14 00:31:10.109417 kubelet[2886]: I0114 00:31:10.109345 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsckj\" (UniqueName: \"kubernetes.io/projected/c604fd3c-83a2-496b-a7f9-8f8a03e00409-kube-api-access-xsckj\") pod \"calico-kube-controllers-5cbc4f559-f6xkt\" (UID: \"c604fd3c-83a2-496b-a7f9-8f8a03e00409\") " pod="calico-system/calico-kube-controllers-5cbc4f559-f6xkt" Jan 14 00:31:10.109703 kubelet[2886]: I0114 00:31:10.109683 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c604fd3c-83a2-496b-a7f9-8f8a03e00409-tigera-ca-bundle\") pod \"calico-kube-controllers-5cbc4f559-f6xkt\" (UID: \"c604fd3c-83a2-496b-a7f9-8f8a03e00409\") " pod="calico-system/calico-kube-controllers-5cbc4f559-f6xkt" Jan 14 00:31:10.271482 containerd[1612]: time="2026-01-14T00:31:10.271404198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-646545d69-xpxrn,Uid:e13d0bbb-aebc-452d-9c9e-69de55abbb87,Namespace:calico-system,Attempt:0,}" Jan 14 00:31:10.291187 containerd[1612]: time="2026-01-14T00:31:10.290608650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b56c4d86d-jdxwl,Uid:900e30be-5423-4b2c-9623-a51920c0a748,Namespace:calico-apiserver,Attempt:0,}" Jan 14 00:31:10.316610 containerd[1612]: time="2026-01-14T00:31:10.316297613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b56c4d86d-wv9l2,Uid:e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a,Namespace:calico-apiserver,Attempt:0,}" Jan 14 00:31:10.336109 containerd[1612]: time="2026-01-14T00:31:10.335992702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jkl7t,Uid:4dd4a077-4753-4782-8d25-a0a09436f34f,Namespace:kube-system,Attempt:0,}" Jan 14 00:31:10.360270 containerd[1612]: time="2026-01-14T00:31:10.360197396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nsbsf,Uid:67055487-2b15-4e2a-8975-7fee787b4309,Namespace:calico-system,Attempt:0,}" Jan 14 00:31:10.379177 containerd[1612]: time="2026-01-14T00:31:10.378886293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbc4f559-f6xkt,Uid:c604fd3c-83a2-496b-a7f9-8f8a03e00409,Namespace:calico-system,Attempt:0,}" Jan 14 00:31:10.384086 containerd[1612]: time="2026-01-14T00:31:10.384033814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zw96t,Uid:0d97a284-7542-4401-aed1-52eb725b1c6d,Namespace:kube-system,Attempt:0,}" Jan 14 00:31:10.524017 containerd[1612]: time="2026-01-14T00:31:10.523799541Z" level=error msg="Failed to destroy network for sandbox \"6870d6dded088c3058dad9fd8678b7accd11c90d4f9f4269d714d309492efc84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:31:10.535264 containerd[1612]: time="2026-01-14T00:31:10.534789497Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b56c4d86d-wv9l2,Uid:e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"6870d6dded088c3058dad9fd8678b7accd11c90d4f9f4269d714d309492efc84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:31:10.535707 kubelet[2886]: E0114 00:31:10.535662 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6870d6dded088c3058dad9fd8678b7accd11c90d4f9f4269d714d309492efc84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:31:10.535800 kubelet[2886]: E0114 00:31:10.535733 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6870d6dded088c3058dad9fd8678b7accd11c90d4f9f4269d714d309492efc84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b56c4d86d-wv9l2" Jan 14 00:31:10.535800 kubelet[2886]: E0114 00:31:10.535759 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6870d6dded088c3058dad9fd8678b7accd11c90d4f9f4269d714d309492efc84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b56c4d86d-wv9l2" Jan 14 00:31:10.536749 kubelet[2886]: E0114 00:31:10.535930 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b56c4d86d-wv9l2_calico-apiserver(e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b56c4d86d-wv9l2_calico-apiserver(e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6870d6dded088c3058dad9fd8678b7accd11c90d4f9f4269d714d309492efc84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-wv9l2" podUID="e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a" Jan 14 00:31:10.545833 containerd[1612]: time="2026-01-14T00:31:10.545659014Z" level=error msg="Failed to destroy network for sandbox \"2ca03f9125da87f2ac37a5e2f350f8fdb2a3432cd3a89e6b4914c72192e97537\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:31:10.558420 containerd[1612]: time="2026-01-14T00:31:10.558353116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-646545d69-xpxrn,Uid:e13d0bbb-aebc-452d-9c9e-69de55abbb87,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ca03f9125da87f2ac37a5e2f350f8fdb2a3432cd3a89e6b4914c72192e97537\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 
00:31:10.558806 kubelet[2886]: E0114 00:31:10.558761 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ca03f9125da87f2ac37a5e2f350f8fdb2a3432cd3a89e6b4914c72192e97537\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:31:10.559740 kubelet[2886]: E0114 00:31:10.559073 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ca03f9125da87f2ac37a5e2f350f8fdb2a3432cd3a89e6b4914c72192e97537\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-646545d69-xpxrn" Jan 14 00:31:10.559740 kubelet[2886]: E0114 00:31:10.559106 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ca03f9125da87f2ac37a5e2f350f8fdb2a3432cd3a89e6b4914c72192e97537\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-646545d69-xpxrn" Jan 14 00:31:10.559740 kubelet[2886]: E0114 00:31:10.559186 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-646545d69-xpxrn_calico-system(e13d0bbb-aebc-452d-9c9e-69de55abbb87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-646545d69-xpxrn_calico-system(e13d0bbb-aebc-452d-9c9e-69de55abbb87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ca03f9125da87f2ac37a5e2f350f8fdb2a3432cd3a89e6b4914c72192e97537\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-646545d69-xpxrn" podUID="e13d0bbb-aebc-452d-9c9e-69de55abbb87" Jan 14 00:31:10.577530 containerd[1612]: time="2026-01-14T00:31:10.577468490Z" level=error msg="Failed to destroy network for sandbox \"ee16cc10c474cf81b138f06b3714e6397568620e11dc2bab0b5c8b0b2f7c1620\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:31:10.586183 containerd[1612]: time="2026-01-14T00:31:10.585842066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b56c4d86d-jdxwl,Uid:900e30be-5423-4b2c-9623-a51920c0a748,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee16cc10c474cf81b138f06b3714e6397568620e11dc2bab0b5c8b0b2f7c1620\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:31:10.586385 kubelet[2886]: E0114 00:31:10.586209 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee16cc10c474cf81b138f06b3714e6397568620e11dc2bab0b5c8b0b2f7c1620\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:31:10.586385 kubelet[2886]: E0114 00:31:10.586285 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee16cc10c474cf81b138f06b3714e6397568620e11dc2bab0b5c8b0b2f7c1620\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b56c4d86d-jdxwl" Jan 14 00:31:10.586385 kubelet[2886]: E0114 00:31:10.586311 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee16cc10c474cf81b138f06b3714e6397568620e11dc2bab0b5c8b0b2f7c1620\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b56c4d86d-jdxwl" Jan 14 00:31:10.586475 kubelet[2886]: E0114 00:31:10.586370 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b56c4d86d-jdxwl_calico-apiserver(900e30be-5423-4b2c-9623-a51920c0a748)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b56c4d86d-jdxwl_calico-apiserver(900e30be-5423-4b2c-9623-a51920c0a748)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee16cc10c474cf81b138f06b3714e6397568620e11dc2bab0b5c8b0b2f7c1620\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-jdxwl" podUID="900e30be-5423-4b2c-9623-a51920c0a748" Jan 14 00:31:10.599374 containerd[1612]: time="2026-01-14T00:31:10.599259483Z" level=error msg="Failed to destroy network for sandbox \"62cd421efcf10f9133e31eb8cb0358c3dfc810eb201e9da85c89dedef9b6c402\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:31:10.604972 containerd[1612]: time="2026-01-14T00:31:10.604897519Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbc4f559-f6xkt,Uid:c604fd3c-83a2-496b-a7f9-8f8a03e00409,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62cd421efcf10f9133e31eb8cb0358c3dfc810eb201e9da85c89dedef9b6c402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:31:10.605533 kubelet[2886]: E0114 00:31:10.605441 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62cd421efcf10f9133e31eb8cb0358c3dfc810eb201e9da85c89dedef9b6c402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:31:10.605533 kubelet[2886]: E0114 00:31:10.605507 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"62cd421efcf10f9133e31eb8cb0358c3dfc810eb201e9da85c89dedef9b6c402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cbc4f559-f6xkt" Jan 14 00:31:10.605533 kubelet[2886]: E0114 00:31:10.605527 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62cd421efcf10f9133e31eb8cb0358c3dfc810eb201e9da85c89dedef9b6c402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cbc4f559-f6xkt" Jan 14 00:31:10.605670 kubelet[2886]: E0114 00:31:10.605583 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5cbc4f559-f6xkt_calico-system(c604fd3c-83a2-496b-a7f9-8f8a03e00409)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5cbc4f559-f6xkt_calico-system(c604fd3c-83a2-496b-a7f9-8f8a03e00409)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62cd421efcf10f9133e31eb8cb0358c3dfc810eb201e9da85c89dedef9b6c402\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cbc4f559-f6xkt" podUID="c604fd3c-83a2-496b-a7f9-8f8a03e00409" Jan 14 00:31:10.607001 containerd[1612]: time="2026-01-14T00:31:10.606945944Z" level=error msg="Failed to destroy network for sandbox \"513d6a1c9387fee092db715a39d942de55d595b88e8ef9750d52597971014d5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:31:10.611840 containerd[1612]: time="2026-01-14T00:31:10.611738027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jkl7t,Uid:4dd4a077-4753-4782-8d25-a0a09436f34f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"513d6a1c9387fee092db715a39d942de55d595b88e8ef9750d52597971014d5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:31:10.612157 kubelet[2886]: E0114 00:31:10.612015 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"513d6a1c9387fee092db715a39d942de55d595b88e8ef9750d52597971014d5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:31:10.612157 kubelet[2886]: E0114 00:31:10.612094 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"513d6a1c9387fee092db715a39d942de55d595b88e8ef9750d52597971014d5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-jkl7t" Jan 14 00:31:10.612157 kubelet[2886]: E0114 00:31:10.612133 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"513d6a1c9387fee092db715a39d942de55d595b88e8ef9750d52597971014d5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jkl7t" Jan 14 00:31:10.613716 kubelet[2886]: E0114 00:31:10.612912 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jkl7t_kube-system(4dd4a077-4753-4782-8d25-a0a09436f34f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jkl7t_kube-system(4dd4a077-4753-4782-8d25-a0a09436f34f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"513d6a1c9387fee092db715a39d942de55d595b88e8ef9750d52597971014d5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jkl7t" podUID="4dd4a077-4753-4782-8d25-a0a09436f34f" Jan 14 00:31:10.624192 containerd[1612]: time="2026-01-14T00:31:10.623994013Z" level=error msg="Failed to destroy network for sandbox \"3c5cbc235903f83639595c21bd1805dc374310bc399f76fb46e587bc643fc053\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:31:10.631214 containerd[1612]: time="2026-01-14T00:31:10.631145238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zw96t,Uid:0d97a284-7542-4401-aed1-52eb725b1c6d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c5cbc235903f83639595c21bd1805dc374310bc399f76fb46e587bc643fc053\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:31:10.631398 containerd[1612]: time="2026-01-14T00:31:10.631365516Z" level=error msg="Failed to destroy network for sandbox \"ce84c44dd47d8e22218fff88ad77116cf5bba6187c137b60ef2e0c5071759772\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:31:10.631705 kubelet[2886]: E0114 00:31:10.631646 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c5cbc235903f83639595c21bd1805dc374310bc399f76fb46e587bc643fc053\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:31:10.631780 kubelet[2886]: E0114 00:31:10.631712 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c5cbc235903f83639595c21bd1805dc374310bc399f76fb46e587bc643fc053\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zw96t" Jan 14 00:31:10.631780 kubelet[2886]: E0114 00:31:10.631734 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c5cbc235903f83639595c21bd1805dc374310bc399f76fb46e587bc643fc053\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zw96t" Jan 14 00:31:10.631974 kubelet[2886]: E0114 00:31:10.631789 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-zw96t_kube-system(0d97a284-7542-4401-aed1-52eb725b1c6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-zw96t_kube-system(0d97a284-7542-4401-aed1-52eb725b1c6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c5cbc235903f83639595c21bd1805dc374310bc399f76fb46e587bc643fc053\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zw96t" podUID="0d97a284-7542-4401-aed1-52eb725b1c6d" Jan 14 00:31:10.636946 containerd[1612]: time="2026-01-14T00:31:10.636870434Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nsbsf,Uid:67055487-2b15-4e2a-8975-7fee787b4309,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce84c44dd47d8e22218fff88ad77116cf5bba6187c137b60ef2e0c5071759772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:31:10.637350 kubelet[2886]: E0114 00:31:10.637166 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce84c44dd47d8e22218fff88ad77116cf5bba6187c137b60ef2e0c5071759772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:31:10.637350 kubelet[2886]: E0114 00:31:10.637227 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce84c44dd47d8e22218fff88ad77116cf5bba6187c137b60ef2e0c5071759772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nsbsf" Jan 14 00:31:10.637350 kubelet[2886]: E0114 00:31:10.637247 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce84c44dd47d8e22218fff88ad77116cf5bba6187c137b60ef2e0c5071759772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nsbsf" Jan 14 00:31:10.637479 kubelet[2886]: E0114 00:31:10.637413 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"goldmane-666569f655-nsbsf_calico-system(67055487-2b15-4e2a-8975-7fee787b4309)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-nsbsf_calico-system(67055487-2b15-4e2a-8975-7fee787b4309)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce84c44dd47d8e22218fff88ad77116cf5bba6187c137b60ef2e0c5071759772\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-nsbsf" podUID="67055487-2b15-4e2a-8975-7fee787b4309" Jan 14 00:31:11.778258 systemd[1]: Created slice kubepods-besteffort-podc4ec9a31_66c9_4bf7_a831_6c170af7211c.slice - libcontainer container kubepods-besteffort-podc4ec9a31_66c9_4bf7_a831_6c170af7211c.slice. Jan 14 00:31:11.786224 containerd[1612]: time="2026-01-14T00:31:11.786128502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8rkfb,Uid:c4ec9a31-66c9-4bf7-a831-6c170af7211c,Namespace:calico-system,Attempt:0,}" Jan 14 00:31:11.879922 containerd[1612]: time="2026-01-14T00:31:11.879172327Z" level=error msg="Failed to destroy network for sandbox \"5eb4ec6fd7a11503353f1d5ea78b1bd3700e62a1ba423d4b1ae594ec91b5d843\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:31:11.881311 systemd[1]: run-netns-cni\x2d51e9bf36\x2d3326\x2d5130\x2d8241\x2d990c827d3551.mount: Deactivated successfully. Jan 14 00:31:11.888415 containerd[1612]: time="2026-01-14T00:31:11.888349659Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8rkfb,Uid:c4ec9a31-66c9-4bf7-a831-6c170af7211c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eb4ec6fd7a11503353f1d5ea78b1bd3700e62a1ba423d4b1ae594ec91b5d843\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:31:11.889125 kubelet[2886]: E0114 00:31:11.888962 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eb4ec6fd7a11503353f1d5ea78b1bd3700e62a1ba423d4b1ae594ec91b5d843\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:31:11.889511 kubelet[2886]: E0114 00:31:11.889167 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eb4ec6fd7a11503353f1d5ea78b1bd3700e62a1ba423d4b1ae594ec91b5d843\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8rkfb" Jan 14 00:31:11.889511 kubelet[2886]: E0114 00:31:11.889205 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eb4ec6fd7a11503353f1d5ea78b1bd3700e62a1ba423d4b1ae594ec91b5d843\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8rkfb" Jan 14 00:31:11.889511 kubelet[2886]: E0114 00:31:11.889279 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8rkfb_calico-system(c4ec9a31-66c9-4bf7-a831-6c170af7211c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8rkfb_calico-system(c4ec9a31-66c9-4bf7-a831-6c170af7211c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5eb4ec6fd7a11503353f1d5ea78b1bd3700e62a1ba423d4b1ae594ec91b5d843\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8rkfb" podUID="c4ec9a31-66c9-4bf7-a831-6c170af7211c" Jan 14 00:31:12.303226 sshd[3772]: Connection closed by authenticating user root 5.187.35.21 port 29280 [preauth] Jan 14 00:31:12.303000 audit[3772]: USER_ERR pid=3772 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:31:12.305903 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 14 00:31:12.306051 kernel: audit: type=1109 audit(1768350672.303:632): pid=3772 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:31:12.309037 systemd[1]: sshd@29-91.99.0.249:22-5.187.35.21:29280.service: Deactivated successfully. Jan 14 00:31:12.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-91.99.0.249:22-5.187.35.21:29280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:31:12.315891 kernel: audit: type=1131 audit(1768350672.310:633): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-91.99.0.249:22-5.187.35.21:29280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:31:12.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-91.99.0.249:22-5.187.35.21:36078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:31:12.341030 systemd[1]: Started sshd@30-91.99.0.249:22-5.187.35.21:36078.service - OpenSSH per-connection server daemon (5.187.35.21:36078). Jan 14 00:31:12.346857 kernel: audit: type=1130 audit(1768350672.340:634): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-91.99.0.249:22-5.187.35.21:36078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:31:15.751337 sshd[4007]: Connection closed by authenticating user root 5.187.35.21 port 36078 [preauth] Jan 14 00:31:15.751000 audit[4007]: USER_ERR pid=4007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" 
exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:31:15.755962 kernel: audit: type=1109 audit(1768350675.751:635): pid=4007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:31:15.758623 systemd[1]: sshd@30-91.99.0.249:22-5.187.35.21:36078.service: Deactivated successfully. Jan 14 00:31:15.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-91.99.0.249:22-5.187.35.21:36078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:31:15.764843 kernel: audit: type=1131 audit(1768350675.760:636): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-91.99.0.249:22-5.187.35.21:36078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:31:15.783431 systemd[1]: Started sshd@31-91.99.0.249:22-5.187.35.21:36096.service - OpenSSH per-connection server daemon (5.187.35.21:36096). Jan 14 00:31:15.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-91.99.0.249:22-5.187.35.21:36096 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:31:15.787895 kernel: audit: type=1130 audit(1768350675.782:637): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-91.99.0.249:22-5.187.35.21:36096 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:31:17.202789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount121105029.mount: Deactivated successfully. 
Jan 14 00:31:17.229686 containerd[1612]: time="2026-01-14T00:31:17.228726283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:31:17.249151 containerd[1612]: time="2026-01-14T00:31:17.230347992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 14 00:31:17.249151 containerd[1612]: time="2026-01-14T00:31:17.231846903Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:31:17.249422 containerd[1612]: time="2026-01-14T00:31:17.237072110Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 7.167214006s" Jan 14 00:31:17.249422 containerd[1612]: time="2026-01-14T00:31:17.249299992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 14 00:31:17.250598 containerd[1612]: time="2026-01-14T00:31:17.250514625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:31:17.278499 containerd[1612]: time="2026-01-14T00:31:17.278453247Z" level=info msg="CreateContainer within sandbox \"464d20972f0a618f7dc4ba36bdb294268305c2dc362648806d1d7b6398316a8f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 00:31:17.297139 containerd[1612]: time="2026-01-14T00:31:17.296063136Z" level=info msg="Container 52114cd913119552b81ed4ad6e83650208b60d9201788c6c8949c094d6ea2091: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:31:17.296867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1665074270.mount: Deactivated successfully. Jan 14 00:31:17.309051 containerd[1612]: time="2026-01-14T00:31:17.308874374Z" level=info msg="CreateContainer within sandbox \"464d20972f0a618f7dc4ba36bdb294268305c2dc362648806d1d7b6398316a8f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"52114cd913119552b81ed4ad6e83650208b60d9201788c6c8949c094d6ea2091\"" Jan 14 00:31:17.311044 containerd[1612]: time="2026-01-14T00:31:17.310956401Z" level=info msg="StartContainer for \"52114cd913119552b81ed4ad6e83650208b60d9201788c6c8949c094d6ea2091\"" Jan 14 00:31:17.315556 containerd[1612]: time="2026-01-14T00:31:17.315351293Z" level=info msg="connecting to shim 52114cd913119552b81ed4ad6e83650208b60d9201788c6c8949c094d6ea2091" address="unix:///run/containerd/s/70a003d5b456dc88b9f6e675d322c26dc2a5c24c7e5e0ae30460510e0c9fdbe1" protocol=ttrpc version=3 Jan 14 00:31:17.380301 systemd[1]: Started cri-containerd-52114cd913119552b81ed4ad6e83650208b60d9201788c6c8949c094d6ea2091.scope - libcontainer container 52114cd913119552b81ed4ad6e83650208b60d9201788c6c8949c094d6ea2091. 
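The "7.167214006s" containerd reports for the calico/node pull is its own internal measurement; it agrees, to within a fraction of a millisecond, with the gap between the PullImage request logged at 00:31:10.069684425Z and the Pulled event timestamped 00:31:17.237072110Z above. A small sketch of that cross-check, with the timestamps copied from the log and truncated to microseconds because strptime's %f accepts at most six fractional digits:

```python
#!/usr/bin/env python3
# Minimal sketch: cross-check containerd's reported pull duration against the
# surrounding log timestamps. Both values are copied from the log, with the
# nanosecond digits dropped so that %f (max 6 digits) can parse them.
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M:%S.%fZ"

pull_requested = datetime.strptime("2026-01-14T00:31:10.069684Z", FMT)
pulled_event   = datetime.strptime("2026-01-14T00:31:17.237072Z", FMT)

elapsed = (pulled_event - pull_requested).total_seconds()
print(f"log-derived pull time: {elapsed:.6f}s (containerd reported 7.167214006s)")
```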
Jan 14 00:31:17.484000 audit: BPF prog-id=170 op=LOAD Jan 14 00:31:17.488835 kernel: audit: type=1334 audit(1768350677.484:638): prog-id=170 op=LOAD Jan 14 00:31:17.488961 kernel: audit: type=1300 audit(1768350677.484:638): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3483 pid=4022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:17.484000 audit[4022]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3483 pid=4022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:17.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532313134636439313331313935353262383165643461643665383336 Jan 14 00:31:17.491897 kernel: audit: type=1327 audit(1768350677.484:638): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532313134636439313331313935353262383165643461643665383336 Jan 14 00:31:17.493256 kernel: audit: type=1334 audit(1768350677.485:639): prog-id=171 op=LOAD Jan 14 00:31:17.485000 audit: BPF prog-id=171 op=LOAD Jan 14 00:31:17.485000 audit[4022]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3483 pid=4022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:17.495967 kernel: audit: type=1300 audit(1768350677.485:639): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3483 pid=4022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:17.498628 kernel: audit: type=1327 audit(1768350677.485:639): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532313134636439313331313935353262383165643461643665383336 Jan 14 00:31:17.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532313134636439313331313935353262383165643461643665383336 Jan 14 00:31:17.485000 audit: BPF prog-id=171 op=UNLOAD Jan 14 00:31:17.500937 kernel: audit: type=1334 audit(1768350677.485:640): prog-id=171 op=UNLOAD Jan 14 00:31:17.503466 kernel: audit: type=1300 audit(1768350677.485:640): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3483 pid=4022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:17.485000 audit[4022]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 
items=0 ppid=3483 pid=4022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:17.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532313134636439313331313935353262383165643461643665383336 Jan 14 00:31:17.506835 kernel: audit: type=1327 audit(1768350677.485:640): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532313134636439313331313935353262383165643461643665383336 Jan 14 00:31:17.485000 audit: BPF prog-id=170 op=UNLOAD Jan 14 00:31:17.485000 audit[4022]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3483 pid=4022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:17.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532313134636439313331313935353262383165643461643665383336 Jan 14 00:31:17.485000 audit: BPF prog-id=172 op=LOAD Jan 14 00:31:17.485000 audit[4022]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=3483 pid=4022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:17.508853 kernel: audit: type=1334 audit(1768350677.485:641): prog-id=170 op=UNLOAD Jan 14 00:31:17.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532313134636439313331313935353262383165643461643665383336 Jan 14 00:31:17.531164 containerd[1612]: time="2026-01-14T00:31:17.531111445Z" level=info msg="StartContainer for \"52114cd913119552b81ed4ad6e83650208b60d9201788c6c8949c094d6ea2091\" returns successfully" Jan 14 00:31:17.700859 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 00:31:17.701029 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
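The proctitle= fields in the audit records above are the audited process's command line, hex-encoded with NUL bytes separating the arguments (and truncated by the audit subsystem, which is why the container id at the end is cut short). Decoding the value from the pid=4022 records shows it is simply runc being invoked for the calico-node task:

```python
#!/usr/bin/env python3
# Minimal sketch: decode a proctitle= value from the audit records above.
# The hex is copied from the pid=4022 records; audit truncates proctitle,
# so the trailing container id is incomplete.
PROCTITLE_HEX = (
    "72756E63"                                                # "runc"
    "00"
    "2D2D726F6F74"                                            # "--root"
    "00"
    "2F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"  # "/run/containerd/runc/k8s.io"
    "00"
    "2D2D6C6F67"                                              # "--log"
    "00"
    "2F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572"
    "642E72756E74696D652E76322E7461736B2F6B38732E696F2F"      # ".../io.containerd.runtime.v2.task/k8s.io/"
    "3532313134636439313331313935353262383165643461643665383336"  # truncated id 52114cd9...
)

args = bytes.fromhex(PROCTITLE_HEX).split(b"\x00")
print(" ".join(a.decode() for a in args))
# runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containe...
```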
Jan 14 00:31:17.991856 kubelet[2886]: I0114 00:31:17.991570 2886 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e13d0bbb-aebc-452d-9c9e-69de55abbb87-whisker-ca-bundle\") pod \"e13d0bbb-aebc-452d-9c9e-69de55abbb87\" (UID: \"e13d0bbb-aebc-452d-9c9e-69de55abbb87\") " Jan 14 00:31:17.991856 kubelet[2886]: I0114 00:31:17.991723 2886 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgqzs\" (UniqueName: \"kubernetes.io/projected/e13d0bbb-aebc-452d-9c9e-69de55abbb87-kube-api-access-qgqzs\") pod \"e13d0bbb-aebc-452d-9c9e-69de55abbb87\" (UID: \"e13d0bbb-aebc-452d-9c9e-69de55abbb87\") " Jan 14 00:31:17.992468 kubelet[2886]: I0114 00:31:17.992125 2886 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e13d0bbb-aebc-452d-9c9e-69de55abbb87-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e13d0bbb-aebc-452d-9c9e-69de55abbb87" (UID: "e13d0bbb-aebc-452d-9c9e-69de55abbb87"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 00:31:17.993525 kubelet[2886]: I0114 00:31:17.992851 2886 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e13d0bbb-aebc-452d-9c9e-69de55abbb87-whisker-backend-key-pair\") pod \"e13d0bbb-aebc-452d-9c9e-69de55abbb87\" (UID: \"e13d0bbb-aebc-452d-9c9e-69de55abbb87\") " Jan 14 00:31:17.993525 kubelet[2886]: I0114 00:31:17.993030 2886 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e13d0bbb-aebc-452d-9c9e-69de55abbb87-whisker-ca-bundle\") on node \"ci-4547-0-0-n-a43761813d\" DevicePath \"\"" Jan 14 00:31:18.001135 kubelet[2886]: I0114 00:31:18.001056 2886 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13d0bbb-aebc-452d-9c9e-69de55abbb87-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e13d0bbb-aebc-452d-9c9e-69de55abbb87" (UID: "e13d0bbb-aebc-452d-9c9e-69de55abbb87"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 00:31:18.001424 kubelet[2886]: I0114 00:31:18.001404 2886 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e13d0bbb-aebc-452d-9c9e-69de55abbb87-kube-api-access-qgqzs" (OuterVolumeSpecName: "kube-api-access-qgqzs") pod "e13d0bbb-aebc-452d-9c9e-69de55abbb87" (UID: "e13d0bbb-aebc-452d-9c9e-69de55abbb87"). InnerVolumeSpecName "kube-api-access-qgqzs". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 00:31:18.093475 kubelet[2886]: I0114 00:31:18.093425 2886 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgqzs\" (UniqueName: \"kubernetes.io/projected/e13d0bbb-aebc-452d-9c9e-69de55abbb87-kube-api-access-qgqzs\") on node \"ci-4547-0-0-n-a43761813d\" DevicePath \"\"" Jan 14 00:31:18.093475 kubelet[2886]: I0114 00:31:18.093476 2886 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e13d0bbb-aebc-452d-9c9e-69de55abbb87-whisker-backend-key-pair\") on node \"ci-4547-0-0-n-a43761813d\" DevicePath \"\"" Jan 14 00:31:18.124879 systemd[1]: Removed slice kubepods-besteffort-pode13d0bbb_aebc_452d_9c9e_69de55abbb87.slice - libcontainer container kubepods-besteffort-pode13d0bbb_aebc_452d_9c9e_69de55abbb87.slice. Jan 14 00:31:18.176247 kubelet[2886]: I0114 00:31:18.176173 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rr9n8" podStartSLOduration=1.8680644929999999 podStartE2EDuration="19.176154302s" podCreationTimestamp="2026-01-14 00:30:59 +0000 UTC" firstStartedPulling="2026-01-14 00:30:59.943622968 +0000 UTC m=+30.339718803" lastFinishedPulling="2026-01-14 00:31:17.251712737 +0000 UTC m=+47.647808612" observedRunningTime="2026-01-14 00:31:18.145504212 +0000 UTC m=+48.541600047" watchObservedRunningTime="2026-01-14 00:31:18.176154302 +0000 UTC m=+48.572250137" Jan 14 00:31:18.202939 systemd[1]: var-lib-kubelet-pods-e13d0bbb\x2daebc\x2d452d\x2d9c9e\x2d69de55abbb87-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqgqzs.mount: Deactivated successfully. Jan 14 00:31:18.205116 systemd[1]: var-lib-kubelet-pods-e13d0bbb\x2daebc\x2d452d\x2d9c9e\x2d69de55abbb87-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 14 00:31:18.281135 systemd[1]: Created slice kubepods-besteffort-podef34e9a3_c694_439f_b718_c0f1c8545835.slice - libcontainer container kubepods-besteffort-podef34e9a3_c694_439f_b718_c0f1c8545835.slice. 
Jan 14 00:31:18.397149 kubelet[2886]: I0114 00:31:18.396495 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef34e9a3-c694-439f-b718-c0f1c8545835-whisker-ca-bundle\") pod \"whisker-66d779df8f-4d6qc\" (UID: \"ef34e9a3-c694-439f-b718-c0f1c8545835\") " pod="calico-system/whisker-66d779df8f-4d6qc" Jan 14 00:31:18.397149 kubelet[2886]: I0114 00:31:18.396558 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsdtd\" (UniqueName: \"kubernetes.io/projected/ef34e9a3-c694-439f-b718-c0f1c8545835-kube-api-access-zsdtd\") pod \"whisker-66d779df8f-4d6qc\" (UID: \"ef34e9a3-c694-439f-b718-c0f1c8545835\") " pod="calico-system/whisker-66d779df8f-4d6qc" Jan 14 00:31:18.397149 kubelet[2886]: I0114 00:31:18.396584 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ef34e9a3-c694-439f-b718-c0f1c8545835-whisker-backend-key-pair\") pod \"whisker-66d779df8f-4d6qc\" (UID: \"ef34e9a3-c694-439f-b718-c0f1c8545835\") " pod="calico-system/whisker-66d779df8f-4d6qc" Jan 14 00:31:18.588742 containerd[1612]: time="2026-01-14T00:31:18.588500196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66d779df8f-4d6qc,Uid:ef34e9a3-c694-439f-b718-c0f1c8545835,Namespace:calico-system,Attempt:0,}" Jan 14 00:31:18.882064 systemd-networkd[1497]: cali4c89d009b1c: Link UP Jan 14 00:31:18.884438 systemd-networkd[1497]: cali4c89d009b1c: Gained carrier Jan 14 00:31:18.907333 containerd[1612]: 2026-01-14 00:31:18.631 [INFO][4110] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 00:31:18.907333 containerd[1612]: 2026-01-14 00:31:18.730 [INFO][4110] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--a43761813d-k8s-whisker--66d779df8f--4d6qc-eth0 whisker-66d779df8f- calico-system ef34e9a3-c694-439f-b718-c0f1c8545835 935 0 2026-01-14 00:31:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66d779df8f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547-0-0-n-a43761813d whisker-66d779df8f-4d6qc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4c89d009b1c [] [] }} ContainerID="abe25b9424bbc706b66dc7c88a516fb917fa5dbeb8213a0fc1ff2123e286fb4c" Namespace="calico-system" Pod="whisker-66d779df8f-4d6qc" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-whisker--66d779df8f--4d6qc-" Jan 14 00:31:18.907333 containerd[1612]: 2026-01-14 00:31:18.730 [INFO][4110] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="abe25b9424bbc706b66dc7c88a516fb917fa5dbeb8213a0fc1ff2123e286fb4c" Namespace="calico-system" Pod="whisker-66d779df8f-4d6qc" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-whisker--66d779df8f--4d6qc-eth0" Jan 14 00:31:18.907333 containerd[1612]: 2026-01-14 00:31:18.791 [INFO][4122] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="abe25b9424bbc706b66dc7c88a516fb917fa5dbeb8213a0fc1ff2123e286fb4c" HandleID="k8s-pod-network.abe25b9424bbc706b66dc7c88a516fb917fa5dbeb8213a0fc1ff2123e286fb4c" Workload="ci--4547--0--0--n--a43761813d-k8s-whisker--66d779df8f--4d6qc-eth0" Jan 14 00:31:18.907669 containerd[1612]: 2026-01-14 00:31:18.791 [INFO][4122] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="abe25b9424bbc706b66dc7c88a516fb917fa5dbeb8213a0fc1ff2123e286fb4c" HandleID="k8s-pod-network.abe25b9424bbc706b66dc7c88a516fb917fa5dbeb8213a0fc1ff2123e286fb4c" Workload="ci--4547--0--0--n--a43761813d-k8s-whisker--66d779df8f--4d6qc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000325730), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-n-a43761813d", "pod":"whisker-66d779df8f-4d6qc", "timestamp":"2026-01-14 00:31:18.791098264 +0000 UTC"}, Hostname:"ci-4547-0-0-n-a43761813d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:31:18.907669 containerd[1612]: 2026-01-14 00:31:18.791 [INFO][4122] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:31:18.907669 containerd[1612]: 2026-01-14 00:31:18.791 [INFO][4122] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 00:31:18.907669 containerd[1612]: 2026-01-14 00:31:18.791 [INFO][4122] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-a43761813d' Jan 14 00:31:18.907669 containerd[1612]: 2026-01-14 00:31:18.810 [INFO][4122] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.abe25b9424bbc706b66dc7c88a516fb917fa5dbeb8213a0fc1ff2123e286fb4c" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:18.907669 containerd[1612]: 2026-01-14 00:31:18.822 [INFO][4122] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:18.907669 containerd[1612]: 2026-01-14 00:31:18.829 [INFO][4122] ipam/ipam.go 511: Trying affinity for 192.168.105.0/26 host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:18.907669 containerd[1612]: 2026-01-14 00:31:18.832 [INFO][4122] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.0/26 host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:18.907669 containerd[1612]: 2026-01-14 00:31:18.835 [INFO][4122] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.0/26 host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:18.908118 containerd[1612]: 2026-01-14 00:31:18.836 [INFO][4122] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.0/26 handle="k8s-pod-network.abe25b9424bbc706b66dc7c88a516fb917fa5dbeb8213a0fc1ff2123e286fb4c" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:18.908118 containerd[1612]: 2026-01-14 00:31:18.838 [INFO][4122] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.abe25b9424bbc706b66dc7c88a516fb917fa5dbeb8213a0fc1ff2123e286fb4c Jan 14 00:31:18.908118 containerd[1612]: 2026-01-14 00:31:18.846 [INFO][4122] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.0/26 handle="k8s-pod-network.abe25b9424bbc706b66dc7c88a516fb917fa5dbeb8213a0fc1ff2123e286fb4c" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:18.908118 containerd[1612]: 2026-01-14 00:31:18.860 [INFO][4122] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.105.1/26] block=192.168.105.0/26 handle="k8s-pod-network.abe25b9424bbc706b66dc7c88a516fb917fa5dbeb8213a0fc1ff2123e286fb4c" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:18.908118 containerd[1612]: 2026-01-14 00:31:18.860 [INFO][4122] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.1/26] handle="k8s-pod-network.abe25b9424bbc706b66dc7c88a516fb917fa5dbeb8213a0fc1ff2123e286fb4c" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:18.908118 containerd[1612]: 
2026-01-14 00:31:18.861 [INFO][4122] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 00:31:18.908118 containerd[1612]: 2026-01-14 00:31:18.861 [INFO][4122] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.1/26] IPv6=[] ContainerID="abe25b9424bbc706b66dc7c88a516fb917fa5dbeb8213a0fc1ff2123e286fb4c" HandleID="k8s-pod-network.abe25b9424bbc706b66dc7c88a516fb917fa5dbeb8213a0fc1ff2123e286fb4c" Workload="ci--4547--0--0--n--a43761813d-k8s-whisker--66d779df8f--4d6qc-eth0" Jan 14 00:31:18.908310 containerd[1612]: 2026-01-14 00:31:18.865 [INFO][4110] cni-plugin/k8s.go 418: Populated endpoint ContainerID="abe25b9424bbc706b66dc7c88a516fb917fa5dbeb8213a0fc1ff2123e286fb4c" Namespace="calico-system" Pod="whisker-66d779df8f-4d6qc" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-whisker--66d779df8f--4d6qc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a43761813d-k8s-whisker--66d779df8f--4d6qc-eth0", GenerateName:"whisker-66d779df8f-", Namespace:"calico-system", SelfLink:"", UID:"ef34e9a3-c694-439f-b718-c0f1c8545835", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 31, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66d779df8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a43761813d", ContainerID:"", Pod:"whisker-66d779df8f-4d6qc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.105.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4c89d009b1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:31:18.908310 containerd[1612]: 2026-01-14 00:31:18.866 [INFO][4110] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.1/32] ContainerID="abe25b9424bbc706b66dc7c88a516fb917fa5dbeb8213a0fc1ff2123e286fb4c" Namespace="calico-system" Pod="whisker-66d779df8f-4d6qc" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-whisker--66d779df8f--4d6qc-eth0" Jan 14 00:31:18.908426 containerd[1612]: 2026-01-14 00:31:18.866 [INFO][4110] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c89d009b1c ContainerID="abe25b9424bbc706b66dc7c88a516fb917fa5dbeb8213a0fc1ff2123e286fb4c" Namespace="calico-system" Pod="whisker-66d779df8f-4d6qc" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-whisker--66d779df8f--4d6qc-eth0" Jan 14 00:31:18.908426 containerd[1612]: 2026-01-14 00:31:18.882 [INFO][4110] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="abe25b9424bbc706b66dc7c88a516fb917fa5dbeb8213a0fc1ff2123e286fb4c" Namespace="calico-system" Pod="whisker-66d779df8f-4d6qc" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-whisker--66d779df8f--4d6qc-eth0" Jan 14 00:31:18.908476 containerd[1612]: 2026-01-14 00:31:18.883 [INFO][4110] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="abe25b9424bbc706b66dc7c88a516fb917fa5dbeb8213a0fc1ff2123e286fb4c" Namespace="calico-system" Pod="whisker-66d779df8f-4d6qc" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-whisker--66d779df8f--4d6qc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a43761813d-k8s-whisker--66d779df8f--4d6qc-eth0", GenerateName:"whisker-66d779df8f-", Namespace:"calico-system", SelfLink:"", UID:"ef34e9a3-c694-439f-b718-c0f1c8545835", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 31, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66d779df8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a43761813d", ContainerID:"abe25b9424bbc706b66dc7c88a516fb917fa5dbeb8213a0fc1ff2123e286fb4c", Pod:"whisker-66d779df8f-4d6qc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.105.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4c89d009b1c", MAC:"fe:af:ef:53:f0:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:31:18.908545 containerd[1612]: 2026-01-14 00:31:18.903 [INFO][4110] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="abe25b9424bbc706b66dc7c88a516fb917fa5dbeb8213a0fc1ff2123e286fb4c" Namespace="calico-system" Pod="whisker-66d779df8f-4d6qc" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-whisker--66d779df8f--4d6qc-eth0" Jan 14 00:31:18.969702 containerd[1612]: time="2026-01-14T00:31:18.969553042Z" level=info msg="connecting to shim abe25b9424bbc706b66dc7c88a516fb917fa5dbeb8213a0fc1ff2123e286fb4c" address="unix:///run/containerd/s/e92bda7e53c94dacd7dc0f9ddf836dd820d65e502d2d10715037b508f5911f2d" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:31:19.006196 systemd[1]: Started cri-containerd-abe25b9424bbc706b66dc7c88a516fb917fa5dbeb8213a0fc1ff2123e286fb4c.scope - libcontainer container abe25b9424bbc706b66dc7c88a516fb917fa5dbeb8213a0fc1ff2123e286fb4c. 
Jan 14 00:31:19.022000 audit: BPF prog-id=173 op=LOAD Jan 14 00:31:19.023000 audit: BPF prog-id=174 op=LOAD Jan 14 00:31:19.023000 audit[4157]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4144 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:19.023000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162653235623934323462626337303662363664633763383861353136 Jan 14 00:31:19.023000 audit: BPF prog-id=174 op=UNLOAD Jan 14 00:31:19.023000 audit[4157]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4144 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:19.023000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162653235623934323462626337303662363664633763383861353136 Jan 14 00:31:19.024000 audit: BPF prog-id=175 op=LOAD Jan 14 00:31:19.024000 audit[4157]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4144 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:19.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162653235623934323462626337303662363664633763383861353136 Jan 14 00:31:19.024000 audit: BPF prog-id=176 op=LOAD Jan 14 00:31:19.024000 audit[4157]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4144 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:19.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162653235623934323462626337303662363664633763383861353136 Jan 14 00:31:19.024000 audit: BPF prog-id=176 op=UNLOAD Jan 14 00:31:19.024000 audit[4157]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4144 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:19.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162653235623934323462626337303662363664633763383861353136 Jan 14 00:31:19.024000 audit: BPF prog-id=175 op=UNLOAD Jan 14 00:31:19.024000 audit[4157]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4144 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:19.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162653235623934323462626337303662363664633763383861353136 Jan 14 00:31:19.024000 audit: BPF prog-id=177 op=LOAD Jan 14 00:31:19.024000 audit[4157]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4144 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:19.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162653235623934323462626337303662363664633763383861353136 Jan 14 00:31:19.054319 containerd[1612]: time="2026-01-14T00:31:19.054246967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66d779df8f-4d6qc,Uid:ef34e9a3-c694-439f-b718-c0f1c8545835,Namespace:calico-system,Attempt:0,} returns sandbox id \"abe25b9424bbc706b66dc7c88a516fb917fa5dbeb8213a0fc1ff2123e286fb4c\"" Jan 14 00:31:19.057384 containerd[1612]: time="2026-01-14T00:31:19.057317989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 00:31:19.410279 containerd[1612]: time="2026-01-14T00:31:19.409977028Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:31:19.411996 containerd[1612]: time="2026-01-14T00:31:19.411911136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 00:31:19.412717 containerd[1612]: time="2026-01-14T00:31:19.411891536Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 00:31:19.415569 kubelet[2886]: E0114 00:31:19.415492 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:31:19.415569 kubelet[2886]: E0114 00:31:19.415583 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:31:19.424212 kubelet[2886]: E0114 00:31:19.424079 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:512ab7b5f96042dca2636ff671657425,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zsdtd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d779df8f-4d6qc_calico-system(ef34e9a3-c694-439f-b718-c0f1c8545835): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 00:31:19.426867 containerd[1612]: time="2026-01-14T00:31:19.426608528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 00:31:19.476341 sshd[4017]: Connection closed by authenticating user root 5.187.35.21 port 36096 [preauth] Jan 14 00:31:19.476000 audit[4017]: USER_ERR pid=4017 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:31:19.481786 systemd[1]: sshd@31-91.99.0.249:22-5.187.35.21:36096.service: Deactivated successfully. Jan 14 00:31:19.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-91.99.0.249:22-5.187.35.21:36096 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:31:19.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-91.99.0.249:22-5.187.35.21:36132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:31:19.528421 systemd[1]: Started sshd@32-91.99.0.249:22-5.187.35.21:36132.service - OpenSSH per-connection server daemon (5.187.35.21:36132). 
Jan 14 00:31:19.770589 kubelet[2886]: I0114 00:31:19.770253 2886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e13d0bbb-aebc-452d-9c9e-69de55abbb87" path="/var/lib/kubelet/pods/e13d0bbb-aebc-452d-9c9e-69de55abbb87/volumes" Jan 14 00:31:19.781607 containerd[1612]: time="2026-01-14T00:31:19.781531833Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:31:19.793153 containerd[1612]: time="2026-01-14T00:31:19.792718126Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 00:31:19.793607 containerd[1612]: time="2026-01-14T00:31:19.793131163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 00:31:19.794277 kubelet[2886]: E0114 00:31:19.793789 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:31:19.794418 kubelet[2886]: E0114 00:31:19.794287 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:31:19.795046 kubelet[2886]: E0114 00:31:19.794952 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsdtd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d779df8f-4d6qc_calico-system(ef34e9a3-c694-439f-b718-c0f1c8545835): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 00:31:19.797142 kubelet[2886]: E0114 00:31:19.797046 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66d779df8f-4d6qc" podUID="ef34e9a3-c694-439f-b718-c0f1c8545835" Jan 14 00:31:20.026752 systemd-networkd[1497]: cali4c89d009b1c: Gained IPv6LL Jan 14 00:31:20.149907 kubelet[2886]: E0114 00:31:20.149355 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66d779df8f-4d6qc" podUID="ef34e9a3-c694-439f-b718-c0f1c8545835" Jan 14 00:31:20.178000 audit[4307]: NETFILTER_CFG table=filter:121 family=2 entries=22 op=nft_register_rule pid=4307 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:31:20.178000 audit[4307]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc8209140 a2=0 a3=1 items=0 ppid=3053 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:20.178000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:31:20.185000 audit[4307]: NETFILTER_CFG table=nat:122 family=2 entries=12 op=nft_register_rule pid=4307 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:31:20.185000 audit[4307]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc8209140 a2=0 a3=1 items=0 ppid=3053 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:20.185000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:31:21.770965 containerd[1612]: time="2026-01-14T00:31:21.770127185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b56c4d86d-jdxwl,Uid:900e30be-5423-4b2c-9623-a51920c0a748,Namespace:calico-apiserver,Attempt:0,}" Jan 14 00:31:21.774392 containerd[1612]: time="2026-01-14T00:31:21.774297201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jkl7t,Uid:4dd4a077-4753-4782-8d25-a0a09436f34f,Namespace:kube-system,Attempt:0,}" Jan 14 00:31:22.056079 systemd-networkd[1497]: calic71946726ba: Link UP Jan 14 00:31:22.057056 systemd-networkd[1497]: calic71946726ba: Gained carrier Jan 14 00:31:22.093001 containerd[1612]: 2026-01-14 00:31:21.862 [INFO][4331] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 00:31:22.093001 containerd[1612]: 2026-01-14 00:31:21.896 [INFO][4331] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--jdxwl-eth0 calico-apiserver-6b56c4d86d- calico-apiserver 900e30be-5423-4b2c-9623-a51920c0a748 872 0 2026-01-14 00:30:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b56c4d86d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547-0-0-n-a43761813d calico-apiserver-6b56c4d86d-jdxwl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic71946726ba [] [] }} ContainerID="38e92ccb98131d15c7a19a88d748fcd6e685fbcba7d82b6b6851a7fee8e05713" Namespace="calico-apiserver" Pod="calico-apiserver-6b56c4d86d-jdxwl" 
WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--jdxwl-" Jan 14 00:31:22.093001 containerd[1612]: 2026-01-14 00:31:21.896 [INFO][4331] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="38e92ccb98131d15c7a19a88d748fcd6e685fbcba7d82b6b6851a7fee8e05713" Namespace="calico-apiserver" Pod="calico-apiserver-6b56c4d86d-jdxwl" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--jdxwl-eth0" Jan 14 00:31:22.093001 containerd[1612]: 2026-01-14 00:31:21.961 [INFO][4354] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38e92ccb98131d15c7a19a88d748fcd6e685fbcba7d82b6b6851a7fee8e05713" HandleID="k8s-pod-network.38e92ccb98131d15c7a19a88d748fcd6e685fbcba7d82b6b6851a7fee8e05713" Workload="ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--jdxwl-eth0" Jan 14 00:31:22.093396 containerd[1612]: 2026-01-14 00:31:21.962 [INFO][4354] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="38e92ccb98131d15c7a19a88d748fcd6e685fbcba7d82b6b6851a7fee8e05713" HandleID="k8s-pod-network.38e92ccb98131d15c7a19a88d748fcd6e685fbcba7d82b6b6851a7fee8e05713" Workload="ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--jdxwl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034f4d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547-0-0-n-a43761813d", "pod":"calico-apiserver-6b56c4d86d-jdxwl", "timestamp":"2026-01-14 00:31:21.961501693 +0000 UTC"}, Hostname:"ci-4547-0-0-n-a43761813d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:31:22.093396 containerd[1612]: 2026-01-14 00:31:21.962 [INFO][4354] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:31:22.093396 containerd[1612]: 2026-01-14 00:31:21.962 [INFO][4354] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 00:31:22.093396 containerd[1612]: 2026-01-14 00:31:21.962 [INFO][4354] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-a43761813d' Jan 14 00:31:22.093396 containerd[1612]: 2026-01-14 00:31:21.985 [INFO][4354] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.38e92ccb98131d15c7a19a88d748fcd6e685fbcba7d82b6b6851a7fee8e05713" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:22.093396 containerd[1612]: 2026-01-14 00:31:21.995 [INFO][4354] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:22.093396 containerd[1612]: 2026-01-14 00:31:22.004 [INFO][4354] ipam/ipam.go 511: Trying affinity for 192.168.105.0/26 host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:22.093396 containerd[1612]: 2026-01-14 00:31:22.007 [INFO][4354] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.0/26 host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:22.093396 containerd[1612]: 2026-01-14 00:31:22.011 [INFO][4354] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.0/26 host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:22.093773 containerd[1612]: 2026-01-14 00:31:22.012 [INFO][4354] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.0/26 handle="k8s-pod-network.38e92ccb98131d15c7a19a88d748fcd6e685fbcba7d82b6b6851a7fee8e05713" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:22.093773 containerd[1612]: 2026-01-14 00:31:22.018 [INFO][4354] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.38e92ccb98131d15c7a19a88d748fcd6e685fbcba7d82b6b6851a7fee8e05713 Jan 14 00:31:22.093773 containerd[1612]: 2026-01-14 00:31:22.029 [INFO][4354] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.0/26 handle="k8s-pod-network.38e92ccb98131d15c7a19a88d748fcd6e685fbcba7d82b6b6851a7fee8e05713" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:22.093773 containerd[1612]: 2026-01-14 00:31:22.040 [INFO][4354] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.105.2/26] block=192.168.105.0/26 handle="k8s-pod-network.38e92ccb98131d15c7a19a88d748fcd6e685fbcba7d82b6b6851a7fee8e05713" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:22.093773 containerd[1612]: 2026-01-14 00:31:22.041 [INFO][4354] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.2/26] handle="k8s-pod-network.38e92ccb98131d15c7a19a88d748fcd6e685fbcba7d82b6b6851a7fee8e05713" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:22.093773 containerd[1612]: 2026-01-14 00:31:22.041 [INFO][4354] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 00:31:22.093773 containerd[1612]: 2026-01-14 00:31:22.041 [INFO][4354] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.2/26] IPv6=[] ContainerID="38e92ccb98131d15c7a19a88d748fcd6e685fbcba7d82b6b6851a7fee8e05713" HandleID="k8s-pod-network.38e92ccb98131d15c7a19a88d748fcd6e685fbcba7d82b6b6851a7fee8e05713" Workload="ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--jdxwl-eth0" Jan 14 00:31:22.094578 containerd[1612]: 2026-01-14 00:31:22.045 [INFO][4331] cni-plugin/k8s.go 418: Populated endpoint ContainerID="38e92ccb98131d15c7a19a88d748fcd6e685fbcba7d82b6b6851a7fee8e05713" Namespace="calico-apiserver" Pod="calico-apiserver-6b56c4d86d-jdxwl" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--jdxwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--jdxwl-eth0", GenerateName:"calico-apiserver-6b56c4d86d-", Namespace:"calico-apiserver", SelfLink:"", UID:"900e30be-5423-4b2c-9623-a51920c0a748", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 30, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b56c4d86d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a43761813d", ContainerID:"", Pod:"calico-apiserver-6b56c4d86d-jdxwl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic71946726ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:31:22.094652 containerd[1612]: 2026-01-14 00:31:22.046 [INFO][4331] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.2/32] ContainerID="38e92ccb98131d15c7a19a88d748fcd6e685fbcba7d82b6b6851a7fee8e05713" Namespace="calico-apiserver" Pod="calico-apiserver-6b56c4d86d-jdxwl" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--jdxwl-eth0" Jan 14 00:31:22.094652 containerd[1612]: 2026-01-14 00:31:22.046 [INFO][4331] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic71946726ba ContainerID="38e92ccb98131d15c7a19a88d748fcd6e685fbcba7d82b6b6851a7fee8e05713" Namespace="calico-apiserver" Pod="calico-apiserver-6b56c4d86d-jdxwl" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--jdxwl-eth0" Jan 14 00:31:22.094652 containerd[1612]: 2026-01-14 00:31:22.056 [INFO][4331] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38e92ccb98131d15c7a19a88d748fcd6e685fbcba7d82b6b6851a7fee8e05713" Namespace="calico-apiserver" Pod="calico-apiserver-6b56c4d86d-jdxwl" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--jdxwl-eth0" Jan 14 00:31:22.094749 containerd[1612]: 2026-01-14 
00:31:22.058 [INFO][4331] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="38e92ccb98131d15c7a19a88d748fcd6e685fbcba7d82b6b6851a7fee8e05713" Namespace="calico-apiserver" Pod="calico-apiserver-6b56c4d86d-jdxwl" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--jdxwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--jdxwl-eth0", GenerateName:"calico-apiserver-6b56c4d86d-", Namespace:"calico-apiserver", SelfLink:"", UID:"900e30be-5423-4b2c-9623-a51920c0a748", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 30, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b56c4d86d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a43761813d", ContainerID:"38e92ccb98131d15c7a19a88d748fcd6e685fbcba7d82b6b6851a7fee8e05713", Pod:"calico-apiserver-6b56c4d86d-jdxwl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic71946726ba", MAC:"c2:9f:0e:ca:32:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:31:22.094803 containerd[1612]: 2026-01-14 00:31:22.090 [INFO][4331] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="38e92ccb98131d15c7a19a88d748fcd6e685fbcba7d82b6b6851a7fee8e05713" Namespace="calico-apiserver" Pod="calico-apiserver-6b56c4d86d-jdxwl" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--jdxwl-eth0" Jan 14 00:31:22.131255 containerd[1612]: time="2026-01-14T00:31:22.130950585Z" level=info msg="connecting to shim 38e92ccb98131d15c7a19a88d748fcd6e685fbcba7d82b6b6851a7fee8e05713" address="unix:///run/containerd/s/ed1826dd740cb5736fa526d5eef3476ffb619db66f9b7bfe8d233613e18cd67d" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:31:22.185563 systemd-networkd[1497]: cali5367e249f8c: Link UP Jan 14 00:31:22.188277 systemd-networkd[1497]: cali5367e249f8c: Gained carrier Jan 14 00:31:22.211410 systemd[1]: Started cri-containerd-38e92ccb98131d15c7a19a88d748fcd6e685fbcba7d82b6b6851a7fee8e05713.scope - libcontainer container 38e92ccb98131d15c7a19a88d748fcd6e685fbcba7d82b6b6851a7fee8e05713. 
Jan 14 00:31:22.228156 containerd[1612]: 2026-01-14 00:31:21.901 [INFO][4338] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 00:31:22.228156 containerd[1612]: 2026-01-14 00:31:21.934 [INFO][4338] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--jkl7t-eth0 coredns-674b8bbfcf- kube-system 4dd4a077-4753-4782-8d25-a0a09436f34f 877 0 2026-01-14 00:30:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547-0-0-n-a43761813d coredns-674b8bbfcf-jkl7t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5367e249f8c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkl7t" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--jkl7t-" Jan 14 00:31:22.228156 containerd[1612]: 2026-01-14 00:31:21.935 [INFO][4338] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkl7t" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--jkl7t-eth0" Jan 14 00:31:22.228156 containerd[1612]: 2026-01-14 00:31:21.999 [INFO][4359] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a" HandleID="k8s-pod-network.b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a" Workload="ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--jkl7t-eth0" Jan 14 00:31:22.228947 containerd[1612]: 2026-01-14 00:31:22.000 [INFO][4359] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a" HandleID="k8s-pod-network.b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a" Workload="ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--jkl7t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000331d70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547-0-0-n-a43761813d", "pod":"coredns-674b8bbfcf-jkl7t", "timestamp":"2026-01-14 00:31:21.999790235 +0000 UTC"}, Hostname:"ci-4547-0-0-n-a43761813d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:31:22.228947 containerd[1612]: 2026-01-14 00:31:22.000 [INFO][4359] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:31:22.228947 containerd[1612]: 2026-01-14 00:31:22.041 [INFO][4359] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 00:31:22.228947 containerd[1612]: 2026-01-14 00:31:22.041 [INFO][4359] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-a43761813d' Jan 14 00:31:22.228947 containerd[1612]: 2026-01-14 00:31:22.088 [INFO][4359] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:22.228947 containerd[1612]: 2026-01-14 00:31:22.100 [INFO][4359] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:22.228947 containerd[1612]: 2026-01-14 00:31:22.108 [INFO][4359] ipam/ipam.go 511: Trying affinity for 192.168.105.0/26 host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:22.228947 containerd[1612]: 2026-01-14 00:31:22.116 [INFO][4359] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.0/26 host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:22.228947 containerd[1612]: 2026-01-14 00:31:22.127 [INFO][4359] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.0/26 host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:22.229395 containerd[1612]: 2026-01-14 00:31:22.129 [INFO][4359] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.0/26 handle="k8s-pod-network.b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:22.229395 containerd[1612]: 2026-01-14 00:31:22.135 [INFO][4359] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a Jan 14 00:31:22.229395 containerd[1612]: 2026-01-14 00:31:22.152 [INFO][4359] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.0/26 handle="k8s-pod-network.b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:22.229395 containerd[1612]: 2026-01-14 00:31:22.162 [INFO][4359] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.105.3/26] block=192.168.105.0/26 handle="k8s-pod-network.b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:22.229395 containerd[1612]: 2026-01-14 00:31:22.162 [INFO][4359] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.3/26] handle="k8s-pod-network.b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:22.229395 containerd[1612]: 2026-01-14 00:31:22.162 [INFO][4359] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 00:31:22.229395 containerd[1612]: 2026-01-14 00:31:22.162 [INFO][4359] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.3/26] IPv6=[] ContainerID="b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a" HandleID="k8s-pod-network.b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a" Workload="ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--jkl7t-eth0" Jan 14 00:31:22.231717 containerd[1612]: 2026-01-14 00:31:22.171 [INFO][4338] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkl7t" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--jkl7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--jkl7t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4dd4a077-4753-4782-8d25-a0a09436f34f", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 30, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a43761813d", ContainerID:"", Pod:"coredns-674b8bbfcf-jkl7t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5367e249f8c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:31:22.231717 containerd[1612]: 2026-01-14 00:31:22.171 [INFO][4338] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.3/32] ContainerID="b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkl7t" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--jkl7t-eth0" Jan 14 00:31:22.231717 containerd[1612]: 2026-01-14 00:31:22.172 [INFO][4338] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5367e249f8c ContainerID="b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkl7t" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--jkl7t-eth0" Jan 14 00:31:22.231717 containerd[1612]: 2026-01-14 00:31:22.190 [INFO][4338] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-jkl7t" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--jkl7t-eth0" Jan 14 00:31:22.231717 containerd[1612]: 2026-01-14 00:31:22.193 [INFO][4338] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkl7t" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--jkl7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--jkl7t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4dd4a077-4753-4782-8d25-a0a09436f34f", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 30, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a43761813d", ContainerID:"b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a", Pod:"coredns-674b8bbfcf-jkl7t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5367e249f8c", MAC:"7a:e3:42:a9:72:68", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:31:22.231717 containerd[1612]: 2026-01-14 00:31:22.216 [INFO][4338] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkl7t" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--jkl7t-eth0" Jan 14 00:31:22.321000 audit: BPF prog-id=178 op=LOAD Jan 14 00:31:22.323000 audit: BPF prog-id=179 op=LOAD Jan 14 00:31:22.323000 audit[4396]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4384 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:22.323000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338653932636362393831333164313563376131396138386437343866 Jan 14 00:31:22.324000 audit: BPF prog-id=179 op=UNLOAD Jan 14 00:31:22.324000 
audit[4396]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4384 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:22.324000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338653932636362393831333164313563376131396138386437343866 Jan 14 00:31:22.325000 audit: BPF prog-id=180 op=LOAD Jan 14 00:31:22.325000 audit[4396]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4384 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:22.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338653932636362393831333164313563376131396138386437343866 Jan 14 00:31:22.327000 audit: BPF prog-id=181 op=LOAD Jan 14 00:31:22.327000 audit[4396]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4384 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:22.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338653932636362393831333164313563376131396138386437343866 Jan 14 00:31:22.327000 audit: BPF prog-id=181 op=UNLOAD Jan 14 00:31:22.327000 audit[4396]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4384 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:22.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338653932636362393831333164313563376131396138386437343866 Jan 14 00:31:22.327000 audit: BPF prog-id=180 op=UNLOAD Jan 14 00:31:22.327000 audit[4396]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4384 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:22.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338653932636362393831333164313563376131396138386437343866 Jan 14 00:31:22.327000 audit: BPF prog-id=182 op=LOAD Jan 14 00:31:22.327000 audit[4396]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4384 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:22.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338653932636362393831333164313563376131396138386437343866 Jan 14 00:31:22.356834 containerd[1612]: time="2026-01-14T00:31:22.356519211Z" level=info msg="connecting to shim b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a" address="unix:///run/containerd/s/d70f6026b945f8e109ff9f8d0d5bc049e7f0b67f6541cb049285f49b69d15e17" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:31:22.411434 systemd[1]: Started cri-containerd-b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a.scope - libcontainer container b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a. Jan 14 00:31:22.419612 sshd[4251]: Invalid user admin from 5.187.35.21 port 36132 Jan 14 00:31:22.428839 containerd[1612]: time="2026-01-14T00:31:22.426627981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b56c4d86d-jdxwl,Uid:900e30be-5423-4b2c-9623-a51920c0a748,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"38e92ccb98131d15c7a19a88d748fcd6e685fbcba7d82b6b6851a7fee8e05713\"" Jan 14 00:31:22.437483 containerd[1612]: time="2026-01-14T00:31:22.437155203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:31:22.455000 audit: BPF prog-id=183 op=LOAD Jan 14 00:31:22.456000 audit: BPF prog-id=184 op=LOAD Jan 14 00:31:22.456000 audit[4456]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=4445 pid=4456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:22.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237383236653164336465633964616635306139336235636466613966 Jan 14 00:31:22.458000 audit: BPF prog-id=184 op=UNLOAD Jan 14 00:31:22.458000 audit[4456]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4445 pid=4456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:22.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237383236653164336465633964616635306139336235636466613966 Jan 14 00:31:22.458000 audit: BPF prog-id=185 op=LOAD Jan 14 00:31:22.458000 audit[4456]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4445 pid=4456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:22.458000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237383236653164336465633964616635306139336235636466613966 Jan 14 00:31:22.458000 audit: BPF prog-id=186 op=LOAD Jan 14 00:31:22.458000 audit[4456]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4445 pid=4456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:22.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237383236653164336465633964616635306139336235636466613966 Jan 14 00:31:22.459000 audit: BPF prog-id=186 op=UNLOAD Jan 14 00:31:22.459000 audit[4456]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4445 pid=4456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:22.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237383236653164336465633964616635306139336235636466613966 Jan 14 00:31:22.459000 audit: BPF prog-id=185 op=UNLOAD Jan 14 00:31:22.459000 audit[4456]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4445 pid=4456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:22.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237383236653164336465633964616635306139336235636466613966 Jan 14 00:31:22.459000 audit: BPF prog-id=187 op=LOAD Jan 14 00:31:22.459000 audit[4456]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=4445 pid=4456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:22.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237383236653164336465633964616635306139336235636466613966 Jan 14 00:31:22.546552 containerd[1612]: time="2026-01-14T00:31:22.546502475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jkl7t,Uid:4dd4a077-4753-4782-8d25-a0a09436f34f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a\"" Jan 14 00:31:22.556004 containerd[1612]: time="2026-01-14T00:31:22.555734343Z" level=info msg="CreateContainer within sandbox \"b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 00:31:22.659524 containerd[1612]: time="2026-01-14T00:31:22.655205110Z" level=info msg="Container d524f243fd4f7551f9610c89e763cc886d7d1c7cec56452fab6938e1ca05c414: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:31:22.683837 containerd[1612]: time="2026-01-14T00:31:22.683730392Z" level=info msg="CreateContainer within sandbox \"b7826e1d3dec9daf50a93b5cdfa9fcc9eafc696b0a8aca96857633907b5e250a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d524f243fd4f7551f9610c89e763cc886d7d1c7cec56452fab6938e1ca05c414\"" Jan 14 00:31:22.685055 containerd[1612]: time="2026-01-14T00:31:22.684796546Z" level=info msg="StartContainer for \"d524f243fd4f7551f9610c89e763cc886d7d1c7cec56452fab6938e1ca05c414\"" Jan 14 00:31:22.686488 containerd[1612]: time="2026-01-14T00:31:22.686446937Z" level=info msg="connecting to shim d524f243fd4f7551f9610c89e763cc886d7d1c7cec56452fab6938e1ca05c414" address="unix:///run/containerd/s/d70f6026b945f8e109ff9f8d0d5bc049e7f0b67f6541cb049285f49b69d15e17" protocol=ttrpc version=3 Jan 14 00:31:22.710286 systemd[1]: Started cri-containerd-d524f243fd4f7551f9610c89e763cc886d7d1c7cec56452fab6938e1ca05c414.scope - libcontainer container d524f243fd4f7551f9610c89e763cc886d7d1c7cec56452fab6938e1ca05c414. Jan 14 00:31:22.732000 audit: BPF prog-id=188 op=LOAD Jan 14 00:31:22.734082 kernel: kauditd_printk_skb: 80 callbacks suppressed Jan 14 00:31:22.734133 kernel: audit: type=1334 audit(1768350682.732:672): prog-id=188 op=LOAD Jan 14 00:31:22.734000 audit: BPF prog-id=189 op=LOAD Jan 14 00:31:22.736603 kernel: audit: type=1334 audit(1768350682.734:673): prog-id=189 op=LOAD Jan 14 00:31:22.736653 kernel: audit: type=1300 audit(1768350682.734:673): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000206180 a2=98 a3=0 items=0 ppid=4445 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:22.734000 audit[4495]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000206180 a2=98 a3=0 items=0 ppid=4445 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:22.738909 kernel: audit: type=1327 audit(1768350682.734:673): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435323466323433666434663735353166393631306338396537363363 Jan 14 00:31:22.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435323466323433666434663735353166393631306338396537363363 Jan 14 00:31:22.734000 audit: BPF prog-id=189 op=UNLOAD Jan 14 00:31:22.734000 audit[4495]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4445 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:22.743826 kernel: audit: type=1334 audit(1768350682.734:674): prog-id=189 op=UNLOAD Jan 14 00:31:22.743927 kernel: audit: type=1300 audit(1768350682.734:674): 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4445 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:22.743957 kernel: audit: type=1327 audit(1768350682.734:674): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435323466323433666434663735353166393631306338396537363363 Jan 14 00:31:22.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435323466323433666434663735353166393631306338396537363363 Jan 14 00:31:22.734000 audit: BPF prog-id=190 op=LOAD Jan 14 00:31:22.734000 audit[4495]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40002063e8 a2=98 a3=0 items=0 ppid=4445 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:22.749685 kernel: audit: type=1334 audit(1768350682.734:675): prog-id=190 op=LOAD Jan 14 00:31:22.749761 kernel: audit: type=1300 audit(1768350682.734:675): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40002063e8 a2=98 a3=0 items=0 ppid=4445 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:22.749941 kernel: audit: type=1327 audit(1768350682.734:675): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435323466323433666434663735353166393631306338396537363363 Jan 14 00:31:22.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435323466323433666434663735353166393631306338396537363363 Jan 14 00:31:22.738000 audit: BPF prog-id=191 op=LOAD Jan 14 00:31:22.738000 audit[4495]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000206168 a2=98 a3=0 items=0 ppid=4445 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:22.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435323466323433666434663735353166393631306338396537363363 Jan 14 00:31:22.740000 audit: BPF prog-id=191 op=UNLOAD Jan 14 00:31:22.740000 audit[4495]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4445 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:22.740000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435323466323433666434663735353166393631306338396537363363 Jan 14 00:31:22.740000 audit: BPF prog-id=190 op=UNLOAD Jan 14 00:31:22.740000 audit[4495]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4445 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:22.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435323466323433666434663735353166393631306338396537363363 Jan 14 00:31:22.740000 audit: BPF prog-id=192 op=LOAD Jan 14 00:31:22.740000 audit[4495]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000206648 a2=98 a3=0 items=0 ppid=4445 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:22.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435323466323433666434663735353166393631306338396537363363 Jan 14 00:31:22.768185 containerd[1612]: time="2026-01-14T00:31:22.767925764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b56c4d86d-wv9l2,Uid:e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a,Namespace:calico-apiserver,Attempt:0,}" Jan 14 00:31:22.769011 containerd[1612]: time="2026-01-14T00:31:22.768978158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nsbsf,Uid:67055487-2b15-4e2a-8975-7fee787b4309,Namespace:calico-system,Attempt:0,}" Jan 14 00:31:22.778308 containerd[1612]: time="2026-01-14T00:31:22.777975868Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:31:22.784882 containerd[1612]: time="2026-01-14T00:31:22.782082725Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:31:22.787431 kubelet[2886]: E0114 00:31:22.787365 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:31:22.787431 kubelet[2886]: E0114 00:31:22.787426 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:31:22.788010 kubelet[2886]: E0114 00:31:22.787635 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tvhph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b56c4d86d-jdxwl_calico-apiserver(900e30be-5423-4b2c-9623-a51920c0a748): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:31:22.788756 containerd[1612]: time="2026-01-14T00:31:22.788669368Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:31:22.789890 kubelet[2886]: E0114 00:31:22.789738 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-jdxwl" podUID="900e30be-5423-4b2c-9623-a51920c0a748" Jan 14 00:31:22.811058 containerd[1612]: time="2026-01-14T00:31:22.810083569Z" level=info msg="StartContainer for \"d524f243fd4f7551f9610c89e763cc886d7d1c7cec56452fab6938e1ca05c414\" returns successfully" Jan 14 00:31:23.145208 sshd[4251]: Connection closed by invalid user admin 5.187.35.21 port 36132 [preauth] Jan 14 00:31:23.143000 audit[4251]: USER_ERR pid=4251 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" 
exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:31:23.151031 systemd[1]: sshd@32-91.99.0.249:22-5.187.35.21:36132.service: Deactivated successfully. Jan 14 00:31:23.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-91.99.0.249:22-5.187.35.21:36132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:31:23.151319 systemd-networkd[1497]: cali356715d1275: Link UP Jan 14 00:31:23.157336 systemd-networkd[1497]: cali356715d1275: Gained carrier Jan 14 00:31:23.165277 kubelet[2886]: E0114 00:31:23.164992 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-jdxwl" podUID="900e30be-5423-4b2c-9623-a51920c0a748" Jan 14 00:31:23.177772 systemd[1]: Started sshd@33-91.99.0.249:22-5.187.35.21:36428.service - OpenSSH per-connection server daemon (5.187.35.21:36428). Jan 14 00:31:23.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-91.99.0.249:22-5.187.35.21:36428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:31:23.234645 containerd[1612]: 2026-01-14 00:31:22.892 [INFO][4531] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 00:31:23.234645 containerd[1612]: 2026-01-14 00:31:22.929 [INFO][4531] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--a43761813d-k8s-goldmane--666569f655--nsbsf-eth0 goldmane-666569f655- calico-system 67055487-2b15-4e2a-8975-7fee787b4309 875 0 2026-01-14 00:30:54 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547-0-0-n-a43761813d goldmane-666569f655-nsbsf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali356715d1275 [] [] }} ContainerID="0531dfba674f7fe590da6e57acf6c49642384d7a4d13df38ab4b5736412bef0a" Namespace="calico-system" Pod="goldmane-666569f655-nsbsf" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-goldmane--666569f655--nsbsf-" Jan 14 00:31:23.234645 containerd[1612]: 2026-01-14 00:31:22.930 [INFO][4531] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0531dfba674f7fe590da6e57acf6c49642384d7a4d13df38ab4b5736412bef0a" Namespace="calico-system" Pod="goldmane-666569f655-nsbsf" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-goldmane--666569f655--nsbsf-eth0" Jan 14 00:31:23.234645 containerd[1612]: 2026-01-14 00:31:23.027 [INFO][4550] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0531dfba674f7fe590da6e57acf6c49642384d7a4d13df38ab4b5736412bef0a" HandleID="k8s-pod-network.0531dfba674f7fe590da6e57acf6c49642384d7a4d13df38ab4b5736412bef0a" Workload="ci--4547--0--0--n--a43761813d-k8s-goldmane--666569f655--nsbsf-eth0" Jan 14 00:31:23.234645 containerd[1612]: 2026-01-14 00:31:23.030 [INFO][4550] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0531dfba674f7fe590da6e57acf6c49642384d7a4d13df38ab4b5736412bef0a" HandleID="k8s-pod-network.0531dfba674f7fe590da6e57acf6c49642384d7a4d13df38ab4b5736412bef0a" Workload="ci--4547--0--0--n--a43761813d-k8s-goldmane--666569f655--nsbsf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b7050), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-n-a43761813d", "pod":"goldmane-666569f655-nsbsf", "timestamp":"2026-01-14 00:31:23.027155366 +0000 UTC"}, Hostname:"ci-4547-0-0-n-a43761813d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:31:23.234645 containerd[1612]: 2026-01-14 00:31:23.030 [INFO][4550] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:31:23.234645 containerd[1612]: 2026-01-14 00:31:23.031 [INFO][4550] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 00:31:23.234645 containerd[1612]: 2026-01-14 00:31:23.031 [INFO][4550] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-a43761813d' Jan 14 00:31:23.234645 containerd[1612]: 2026-01-14 00:31:23.055 [INFO][4550] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0531dfba674f7fe590da6e57acf6c49642384d7a4d13df38ab4b5736412bef0a" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:23.234645 containerd[1612]: 2026-01-14 00:31:23.074 [INFO][4550] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:23.234645 containerd[1612]: 2026-01-14 00:31:23.086 [INFO][4550] ipam/ipam.go 511: Trying affinity for 192.168.105.0/26 host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:23.234645 containerd[1612]: 2026-01-14 00:31:23.091 [INFO][4550] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.0/26 host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:23.234645 containerd[1612]: 2026-01-14 00:31:23.096 [INFO][4550] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.0/26 host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:23.234645 containerd[1612]: 2026-01-14 00:31:23.096 [INFO][4550] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.0/26 handle="k8s-pod-network.0531dfba674f7fe590da6e57acf6c49642384d7a4d13df38ab4b5736412bef0a" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:23.234645 containerd[1612]: 2026-01-14 00:31:23.099 [INFO][4550] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0531dfba674f7fe590da6e57acf6c49642384d7a4d13df38ab4b5736412bef0a Jan 14 00:31:23.234645 containerd[1612]: 2026-01-14 00:31:23.119 [INFO][4550] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.0/26 handle="k8s-pod-network.0531dfba674f7fe590da6e57acf6c49642384d7a4d13df38ab4b5736412bef0a" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:23.234645 containerd[1612]: 2026-01-14 00:31:23.132 [INFO][4550] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.105.4/26] block=192.168.105.0/26 handle="k8s-pod-network.0531dfba674f7fe590da6e57acf6c49642384d7a4d13df38ab4b5736412bef0a" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:23.234645 containerd[1612]: 2026-01-14 00:31:23.132 [INFO][4550] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.4/26] handle="k8s-pod-network.0531dfba674f7fe590da6e57acf6c49642384d7a4d13df38ab4b5736412bef0a" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:23.234645 containerd[1612]: 
2026-01-14 00:31:23.132 [INFO][4550] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 00:31:23.234645 containerd[1612]: 2026-01-14 00:31:23.132 [INFO][4550] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.4/26] IPv6=[] ContainerID="0531dfba674f7fe590da6e57acf6c49642384d7a4d13df38ab4b5736412bef0a" HandleID="k8s-pod-network.0531dfba674f7fe590da6e57acf6c49642384d7a4d13df38ab4b5736412bef0a" Workload="ci--4547--0--0--n--a43761813d-k8s-goldmane--666569f655--nsbsf-eth0" Jan 14 00:31:23.235918 containerd[1612]: 2026-01-14 00:31:23.138 [INFO][4531] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0531dfba674f7fe590da6e57acf6c49642384d7a4d13df38ab4b5736412bef0a" Namespace="calico-system" Pod="goldmane-666569f655-nsbsf" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-goldmane--666569f655--nsbsf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a43761813d-k8s-goldmane--666569f655--nsbsf-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"67055487-2b15-4e2a-8975-7fee787b4309", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 30, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a43761813d", ContainerID:"", Pod:"goldmane-666569f655-nsbsf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.105.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali356715d1275", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:31:23.235918 containerd[1612]: 2026-01-14 00:31:23.138 [INFO][4531] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.4/32] ContainerID="0531dfba674f7fe590da6e57acf6c49642384d7a4d13df38ab4b5736412bef0a" Namespace="calico-system" Pod="goldmane-666569f655-nsbsf" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-goldmane--666569f655--nsbsf-eth0" Jan 14 00:31:23.235918 containerd[1612]: 2026-01-14 00:31:23.138 [INFO][4531] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali356715d1275 ContainerID="0531dfba674f7fe590da6e57acf6c49642384d7a4d13df38ab4b5736412bef0a" Namespace="calico-system" Pod="goldmane-666569f655-nsbsf" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-goldmane--666569f655--nsbsf-eth0" Jan 14 00:31:23.235918 containerd[1612]: 2026-01-14 00:31:23.152 [INFO][4531] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0531dfba674f7fe590da6e57acf6c49642384d7a4d13df38ab4b5736412bef0a" Namespace="calico-system" Pod="goldmane-666569f655-nsbsf" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-goldmane--666569f655--nsbsf-eth0" Jan 14 00:31:23.235918 containerd[1612]: 2026-01-14 00:31:23.165 [INFO][4531] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="0531dfba674f7fe590da6e57acf6c49642384d7a4d13df38ab4b5736412bef0a" Namespace="calico-system" Pod="goldmane-666569f655-nsbsf" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-goldmane--666569f655--nsbsf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a43761813d-k8s-goldmane--666569f655--nsbsf-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"67055487-2b15-4e2a-8975-7fee787b4309", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 30, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a43761813d", ContainerID:"0531dfba674f7fe590da6e57acf6c49642384d7a4d13df38ab4b5736412bef0a", Pod:"goldmane-666569f655-nsbsf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.105.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali356715d1275", MAC:"c2:b9:fa:fd:f2:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:31:23.235918 containerd[1612]: 2026-01-14 00:31:23.225 [INFO][4531] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0531dfba674f7fe590da6e57acf6c49642384d7a4d13df38ab4b5736412bef0a" Namespace="calico-system" Pod="goldmane-666569f655-nsbsf" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-goldmane--666569f655--nsbsf-eth0" Jan 14 00:31:23.243919 kubelet[2886]: I0114 00:31:23.240751 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jkl7t" podStartSLOduration=48.226332247 podStartE2EDuration="48.226332247s" podCreationTimestamp="2026-01-14 00:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:31:23.224450857 +0000 UTC m=+53.620546692" watchObservedRunningTime="2026-01-14 00:31:23.226332247 +0000 UTC m=+53.622428202" Jan 14 00:31:23.298115 containerd[1612]: time="2026-01-14T00:31:23.297803820Z" level=info msg="connecting to shim 0531dfba674f7fe590da6e57acf6c49642384d7a4d13df38ab4b5736412bef0a" address="unix:///run/containerd/s/518b943421fa681d502fac44bd2b6c57eb1c9f8e47bcd2d94d904d024d44e636" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:31:23.330000 audit[4594]: NETFILTER_CFG table=filter:123 family=2 entries=22 op=nft_register_rule pid=4594 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:31:23.330000 audit[4594]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff754ac20 a2=0 a3=1 items=0 ppid=3053 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:23.330000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:31:23.337000 audit[4594]: NETFILTER_CFG table=nat:124 family=2 entries=12 op=nft_register_rule pid=4594 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:31:23.337000 audit[4594]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff754ac20 a2=0 a3=1 items=0 ppid=3053 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:23.337000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:31:23.377165 systemd-networkd[1497]: cali55a4c4421b2: Link UP Jan 14 00:31:23.381704 systemd-networkd[1497]: cali55a4c4421b2: Gained carrier Jan 14 00:31:23.400355 systemd[1]: Started cri-containerd-0531dfba674f7fe590da6e57acf6c49642384d7a4d13df38ab4b5736412bef0a.scope - libcontainer container 0531dfba674f7fe590da6e57acf6c49642384d7a4d13df38ab4b5736412bef0a. Jan 14 00:31:23.412000 audit[4618]: NETFILTER_CFG table=filter:125 family=2 entries=19 op=nft_register_rule pid=4618 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:31:23.412000 audit[4618]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffea9a9260 a2=0 a3=1 items=0 ppid=3053 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:23.412000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:31:23.423700 containerd[1612]: 2026-01-14 00:31:22.885 [INFO][4521] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 00:31:23.423700 containerd[1612]: 2026-01-14 00:31:22.921 [INFO][4521] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--wv9l2-eth0 calico-apiserver-6b56c4d86d- calico-apiserver e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a 870 0 2026-01-14 00:30:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b56c4d86d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547-0-0-n-a43761813d calico-apiserver-6b56c4d86d-wv9l2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali55a4c4421b2 [] [] }} ContainerID="ceda6ac54def79c226e90bf4399e7b3e5446273dc77e111a6e0d700b4447be59" Namespace="calico-apiserver" Pod="calico-apiserver-6b56c4d86d-wv9l2" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--wv9l2-" Jan 14 00:31:23.423700 containerd[1612]: 2026-01-14 00:31:22.928 [INFO][4521] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ceda6ac54def79c226e90bf4399e7b3e5446273dc77e111a6e0d700b4447be59" Namespace="calico-apiserver" Pod="calico-apiserver-6b56c4d86d-wv9l2" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--wv9l2-eth0" Jan 14 00:31:23.423700 containerd[1612]: 2026-01-14 00:31:23.035 [INFO][4556] ipam/ipam_plugin.go 227: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="ceda6ac54def79c226e90bf4399e7b3e5446273dc77e111a6e0d700b4447be59" HandleID="k8s-pod-network.ceda6ac54def79c226e90bf4399e7b3e5446273dc77e111a6e0d700b4447be59" Workload="ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--wv9l2-eth0" Jan 14 00:31:23.423700 containerd[1612]: 2026-01-14 00:31:23.035 [INFO][4556] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ceda6ac54def79c226e90bf4399e7b3e5446273dc77e111a6e0d700b4447be59" HandleID="k8s-pod-network.ceda6ac54def79c226e90bf4399e7b3e5446273dc77e111a6e0d700b4447be59" Workload="ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--wv9l2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c1830), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547-0-0-n-a43761813d", "pod":"calico-apiserver-6b56c4d86d-wv9l2", "timestamp":"2026-01-14 00:31:23.035418921 +0000 UTC"}, Hostname:"ci-4547-0-0-n-a43761813d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:31:23.423700 containerd[1612]: 2026-01-14 00:31:23.035 [INFO][4556] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:31:23.423700 containerd[1612]: 2026-01-14 00:31:23.133 [INFO][4556] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 00:31:23.423700 containerd[1612]: 2026-01-14 00:31:23.133 [INFO][4556] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-a43761813d' Jan 14 00:31:23.423700 containerd[1612]: 2026-01-14 00:31:23.202 [INFO][4556] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ceda6ac54def79c226e90bf4399e7b3e5446273dc77e111a6e0d700b4447be59" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:23.423700 containerd[1612]: 2026-01-14 00:31:23.233 [INFO][4556] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:23.423700 containerd[1612]: 2026-01-14 00:31:23.257 [INFO][4556] ipam/ipam.go 511: Trying affinity for 192.168.105.0/26 host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:23.423700 containerd[1612]: 2026-01-14 00:31:23.275 [INFO][4556] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.0/26 host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:23.423700 containerd[1612]: 2026-01-14 00:31:23.300 [INFO][4556] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.0/26 host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:23.423700 containerd[1612]: 2026-01-14 00:31:23.301 [INFO][4556] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.0/26 handle="k8s-pod-network.ceda6ac54def79c226e90bf4399e7b3e5446273dc77e111a6e0d700b4447be59" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:23.423700 containerd[1612]: 2026-01-14 00:31:23.314 [INFO][4556] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ceda6ac54def79c226e90bf4399e7b3e5446273dc77e111a6e0d700b4447be59 Jan 14 00:31:23.423700 containerd[1612]: 2026-01-14 00:31:23.327 [INFO][4556] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.0/26 handle="k8s-pod-network.ceda6ac54def79c226e90bf4399e7b3e5446273dc77e111a6e0d700b4447be59" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:23.423700 containerd[1612]: 2026-01-14 00:31:23.357 [INFO][4556] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.105.5/26] block=192.168.105.0/26 
handle="k8s-pod-network.ceda6ac54def79c226e90bf4399e7b3e5446273dc77e111a6e0d700b4447be59" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:23.423700 containerd[1612]: 2026-01-14 00:31:23.357 [INFO][4556] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.5/26] handle="k8s-pod-network.ceda6ac54def79c226e90bf4399e7b3e5446273dc77e111a6e0d700b4447be59" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:23.423700 containerd[1612]: 2026-01-14 00:31:23.357 [INFO][4556] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 00:31:23.423700 containerd[1612]: 2026-01-14 00:31:23.357 [INFO][4556] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.5/26] IPv6=[] ContainerID="ceda6ac54def79c226e90bf4399e7b3e5446273dc77e111a6e0d700b4447be59" HandleID="k8s-pod-network.ceda6ac54def79c226e90bf4399e7b3e5446273dc77e111a6e0d700b4447be59" Workload="ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--wv9l2-eth0" Jan 14 00:31:23.425111 containerd[1612]: 2026-01-14 00:31:23.368 [INFO][4521] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ceda6ac54def79c226e90bf4399e7b3e5446273dc77e111a6e0d700b4447be59" Namespace="calico-apiserver" Pod="calico-apiserver-6b56c4d86d-wv9l2" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--wv9l2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--wv9l2-eth0", GenerateName:"calico-apiserver-6b56c4d86d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 30, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b56c4d86d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a43761813d", ContainerID:"", Pod:"calico-apiserver-6b56c4d86d-wv9l2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali55a4c4421b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:31:23.425111 containerd[1612]: 2026-01-14 00:31:23.368 [INFO][4521] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.5/32] ContainerID="ceda6ac54def79c226e90bf4399e7b3e5446273dc77e111a6e0d700b4447be59" Namespace="calico-apiserver" Pod="calico-apiserver-6b56c4d86d-wv9l2" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--wv9l2-eth0" Jan 14 00:31:23.425111 containerd[1612]: 2026-01-14 00:31:23.368 [INFO][4521] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali55a4c4421b2 ContainerID="ceda6ac54def79c226e90bf4399e7b3e5446273dc77e111a6e0d700b4447be59" Namespace="calico-apiserver" Pod="calico-apiserver-6b56c4d86d-wv9l2" 
WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--wv9l2-eth0" Jan 14 00:31:23.425111 containerd[1612]: 2026-01-14 00:31:23.395 [INFO][4521] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ceda6ac54def79c226e90bf4399e7b3e5446273dc77e111a6e0d700b4447be59" Namespace="calico-apiserver" Pod="calico-apiserver-6b56c4d86d-wv9l2" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--wv9l2-eth0" Jan 14 00:31:23.425111 containerd[1612]: 2026-01-14 00:31:23.397 [INFO][4521] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ceda6ac54def79c226e90bf4399e7b3e5446273dc77e111a6e0d700b4447be59" Namespace="calico-apiserver" Pod="calico-apiserver-6b56c4d86d-wv9l2" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--wv9l2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--wv9l2-eth0", GenerateName:"calico-apiserver-6b56c4d86d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 30, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b56c4d86d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a43761813d", ContainerID:"ceda6ac54def79c226e90bf4399e7b3e5446273dc77e111a6e0d700b4447be59", Pod:"calico-apiserver-6b56c4d86d-wv9l2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali55a4c4421b2", MAC:"22:5c:89:5a:36:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:31:23.425111 containerd[1612]: 2026-01-14 00:31:23.418 [INFO][4521] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ceda6ac54def79c226e90bf4399e7b3e5446273dc77e111a6e0d700b4447be59" Namespace="calico-apiserver" Pod="calico-apiserver-6b56c4d86d-wv9l2" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-calico--apiserver--6b56c4d86d--wv9l2-eth0" Jan 14 00:31:23.432000 audit[4618]: NETFILTER_CFG table=nat:126 family=2 entries=33 op=nft_register_chain pid=4618 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:31:23.432000 audit[4618]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=13428 a0=3 a1=ffffea9a9260 a2=0 a3=1 items=0 ppid=3053 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:23.432000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 
Jan 14 00:31:23.441000 audit: BPF prog-id=193 op=LOAD Jan 14 00:31:23.443000 audit: BPF prog-id=194 op=LOAD Jan 14 00:31:23.443000 audit[4600]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4587 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:23.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035333164666261363734663766653539306461366535376163663663 Jan 14 00:31:23.444000 audit: BPF prog-id=194 op=UNLOAD Jan 14 00:31:23.444000 audit[4600]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4587 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:23.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035333164666261363734663766653539306461366535376163663663 Jan 14 00:31:23.444000 audit: BPF prog-id=195 op=LOAD Jan 14 00:31:23.444000 audit[4600]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4587 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:23.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035333164666261363734663766653539306461366535376163663663 Jan 14 00:31:23.444000 audit: BPF prog-id=196 op=LOAD Jan 14 00:31:23.444000 audit[4600]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4587 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:23.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035333164666261363734663766653539306461366535376163663663 Jan 14 00:31:23.445000 audit: BPF prog-id=196 op=UNLOAD Jan 14 00:31:23.445000 audit[4600]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4587 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:23.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035333164666261363734663766653539306461366535376163663663 Jan 14 00:31:23.445000 audit: BPF prog-id=195 op=UNLOAD Jan 14 00:31:23.445000 audit[4600]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4587 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:23.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035333164666261363734663766653539306461366535376163663663 Jan 14 00:31:23.445000 audit: BPF prog-id=197 op=LOAD Jan 14 00:31:23.445000 audit[4600]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4587 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:23.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035333164666261363734663766653539306461366535376163663663 Jan 14 00:31:23.493305 containerd[1612]: time="2026-01-14T00:31:23.492713444Z" level=info msg="connecting to shim ceda6ac54def79c226e90bf4399e7b3e5446273dc77e111a6e0d700b4447be59" address="unix:///run/containerd/s/23431f4cd69bc5067bc9e5bb5e8f91f4ae3cb6c01884c214ceac9772f649ec6f" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:31:23.562333 containerd[1612]: time="2026-01-14T00:31:23.561586431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nsbsf,Uid:67055487-2b15-4e2a-8975-7fee787b4309,Namespace:calico-system,Attempt:0,} returns sandbox id \"0531dfba674f7fe590da6e57acf6c49642384d7a4d13df38ab4b5736412bef0a\"" Jan 14 00:31:23.570952 containerd[1612]: time="2026-01-14T00:31:23.569329669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 00:31:23.593496 systemd[1]: Started cri-containerd-ceda6ac54def79c226e90bf4399e7b3e5446273dc77e111a6e0d700b4447be59.scope - libcontainer container ceda6ac54def79c226e90bf4399e7b3e5446273dc77e111a6e0d700b4447be59. 
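Annotation (not part of the journal): the entry above shows kubelet asking containerd to pull ghcr.io/flatcar/calico/goldmane:v3.30.4, which the later entries report failing with a 404 from ghcr.io. A minimal sketch for reproducing that pull with the containerd Go client, assuming the default socket path, the CRI plugin's "k8s.io" namespace (both suggested by the shim addresses in the log), and the v1 client import path:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Socket path is an assumption; it is containerd's default location.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin stores its images under the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Attempt the same pull kubelet requested; a missing tag surfaces as the
	// same "not found" error that the PullImage failure below records.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/goldmane:v3.30.4",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatalf("pull failed: %v", err)
	}
	fmt.Println("pulled", img.Name())
}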
Jan 14 00:31:23.609082 systemd-networkd[1497]: calic71946726ba: Gained IPv6LL Jan 14 00:31:23.660000 audit: BPF prog-id=198 op=LOAD Jan 14 00:31:23.664000 audit: BPF prog-id=199 op=LOAD Jan 14 00:31:23.664000 audit[4654]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=4637 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:23.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365646136616335346465663739633232366539306266343339396537 Jan 14 00:31:23.664000 audit: BPF prog-id=199 op=UNLOAD Jan 14 00:31:23.664000 audit[4654]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4637 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:23.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365646136616335346465663739633232366539306266343339396537 Jan 14 00:31:23.665000 audit: BPF prog-id=200 op=LOAD Jan 14 00:31:23.665000 audit[4654]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4637 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:23.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365646136616335346465663739633232366539306266343339396537 Jan 14 00:31:23.665000 audit: BPF prog-id=201 op=LOAD Jan 14 00:31:23.665000 audit[4654]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4637 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:23.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365646136616335346465663739633232366539306266343339396537 Jan 14 00:31:23.666000 audit: BPF prog-id=201 op=UNLOAD Jan 14 00:31:23.666000 audit[4654]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4637 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:23.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365646136616335346465663739633232366539306266343339396537 Jan 14 
00:31:23.666000 audit: BPF prog-id=200 op=UNLOAD Jan 14 00:31:23.666000 audit[4654]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4637 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:23.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365646136616335346465663739633232366539306266343339396537 Jan 14 00:31:23.666000 audit: BPF prog-id=202 op=LOAD Jan 14 00:31:23.666000 audit[4654]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=4637 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:23.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365646136616335346465663739633232366539306266343339396537 Jan 14 00:31:23.756834 containerd[1612]: time="2026-01-14T00:31:23.756430295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b56c4d86d-wv9l2,Uid:e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ceda6ac54def79c226e90bf4399e7b3e5446273dc77e111a6e0d700b4447be59\"" Jan 14 00:31:23.910302 containerd[1612]: time="2026-01-14T00:31:23.909656185Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:31:23.914373 containerd[1612]: time="2026-01-14T00:31:23.914264240Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 00:31:23.914789 containerd[1612]: time="2026-01-14T00:31:23.914396039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 00:31:23.915299 kubelet[2886]: E0114 00:31:23.915102 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:31:23.915299 kubelet[2886]: E0114 00:31:23.915354 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:31:23.916756 kubelet[2886]: E0114 00:31:23.915627 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgztm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nsbsf_calico-system(67055487-2b15-4e2a-8975-7fee787b4309): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 00:31:23.917451 kubelet[2886]: E0114 00:31:23.917060 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nsbsf" podUID="67055487-2b15-4e2a-8975-7fee787b4309" Jan 14 00:31:23.917514 containerd[1612]: time="2026-01-14T00:31:23.917184024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 
14 00:31:23.929247 systemd-networkd[1497]: cali5367e249f8c: Gained IPv6LL Jan 14 00:31:24.168605 kubelet[2886]: E0114 00:31:24.168063 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nsbsf" podUID="67055487-2b15-4e2a-8975-7fee787b4309" Jan 14 00:31:24.168605 kubelet[2886]: E0114 00:31:24.168062 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-jdxwl" podUID="900e30be-5423-4b2c-9623-a51920c0a748" Jan 14 00:31:24.272502 containerd[1612]: time="2026-01-14T00:31:24.272435057Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:31:24.278064 containerd[1612]: time="2026-01-14T00:31:24.277958108Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:31:24.278504 containerd[1612]: time="2026-01-14T00:31:24.277988388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:31:24.278703 kubelet[2886]: E0114 00:31:24.278661 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:31:24.278785 kubelet[2886]: E0114 00:31:24.278716 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:31:24.280404 kubelet[2886]: E0114 00:31:24.280260 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b56c4d86d-wv9l2_calico-apiserver(e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:31:24.281836 kubelet[2886]: E0114 00:31:24.281463 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-wv9l2" podUID="e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a" Jan 14 00:31:24.450000 audit[4701]: NETFILTER_CFG table=filter:127 family=2 entries=16 op=nft_register_rule pid=4701 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:31:24.450000 audit[4701]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc5887470 a2=0 a3=1 items=0 ppid=3053 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:24.450000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:31:24.455000 audit[4701]: NETFILTER_CFG table=nat:128 family=2 entries=18 op=nft_register_rule pid=4701 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:31:24.455000 
audit[4701]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5004 a0=3 a1=ffffc5887470 a2=0 a3=1 items=0 ppid=3053 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:24.455000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:31:24.633178 systemd-networkd[1497]: cali55a4c4421b2: Gained IPv6LL Jan 14 00:31:24.768039 containerd[1612]: time="2026-01-14T00:31:24.767105165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zw96t,Uid:0d97a284-7542-4401-aed1-52eb725b1c6d,Namespace:kube-system,Attempt:0,}" Jan 14 00:31:24.768039 containerd[1612]: time="2026-01-14T00:31:24.767199005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbc4f559-f6xkt,Uid:c604fd3c-83a2-496b-a7f9-8f8a03e00409,Namespace:calico-system,Attempt:0,}" Jan 14 00:31:25.005588 systemd-networkd[1497]: cali651230bc05f: Link UP Jan 14 00:31:25.009118 systemd-networkd[1497]: cali651230bc05f: Gained carrier Jan 14 00:31:25.080525 containerd[1612]: 2026-01-14 00:31:24.816 [INFO][4704] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 00:31:25.080525 containerd[1612]: 2026-01-14 00:31:24.843 [INFO][4704] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--zw96t-eth0 coredns-674b8bbfcf- kube-system 0d97a284-7542-4401-aed1-52eb725b1c6d 876 0 2026-01-14 00:30:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547-0-0-n-a43761813d coredns-674b8bbfcf-zw96t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali651230bc05f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48" Namespace="kube-system" Pod="coredns-674b8bbfcf-zw96t" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--zw96t-" Jan 14 00:31:25.080525 containerd[1612]: 2026-01-14 00:31:24.843 [INFO][4704] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48" Namespace="kube-system" Pod="coredns-674b8bbfcf-zw96t" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--zw96t-eth0" Jan 14 00:31:25.080525 containerd[1612]: 2026-01-14 00:31:24.894 [INFO][4731] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48" HandleID="k8s-pod-network.712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48" Workload="ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--zw96t-eth0" Jan 14 00:31:25.080525 containerd[1612]: 2026-01-14 00:31:24.894 [INFO][4731] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48" HandleID="k8s-pod-network.712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48" Workload="ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--zw96t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003956b0), Attrs:map[string]string{"namespace":"kube-system", 
"node":"ci-4547-0-0-n-a43761813d", "pod":"coredns-674b8bbfcf-zw96t", "timestamp":"2026-01-14 00:31:24.894610092 +0000 UTC"}, Hostname:"ci-4547-0-0-n-a43761813d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:31:25.080525 containerd[1612]: 2026-01-14 00:31:24.895 [INFO][4731] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:31:25.080525 containerd[1612]: 2026-01-14 00:31:24.895 [INFO][4731] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 00:31:25.080525 containerd[1612]: 2026-01-14 00:31:24.895 [INFO][4731] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-a43761813d' Jan 14 00:31:25.080525 containerd[1612]: 2026-01-14 00:31:24.913 [INFO][4731] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:25.080525 containerd[1612]: 2026-01-14 00:31:24.925 [INFO][4731] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:25.080525 containerd[1612]: 2026-01-14 00:31:24.935 [INFO][4731] ipam/ipam.go 511: Trying affinity for 192.168.105.0/26 host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:25.080525 containerd[1612]: 2026-01-14 00:31:24.939 [INFO][4731] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.0/26 host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:25.080525 containerd[1612]: 2026-01-14 00:31:24.942 [INFO][4731] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.0/26 host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:25.080525 containerd[1612]: 2026-01-14 00:31:24.943 [INFO][4731] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.0/26 handle="k8s-pod-network.712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:25.080525 containerd[1612]: 2026-01-14 00:31:24.945 [INFO][4731] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48 Jan 14 00:31:25.080525 containerd[1612]: 2026-01-14 00:31:24.959 [INFO][4731] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.0/26 handle="k8s-pod-network.712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:25.080525 containerd[1612]: 2026-01-14 00:31:24.994 [INFO][4731] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.105.6/26] block=192.168.105.0/26 handle="k8s-pod-network.712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:25.080525 containerd[1612]: 2026-01-14 00:31:24.994 [INFO][4731] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.6/26] handle="k8s-pod-network.712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:25.080525 containerd[1612]: 2026-01-14 00:31:24.994 [INFO][4731] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 00:31:25.080525 containerd[1612]: 2026-01-14 00:31:24.994 [INFO][4731] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.6/26] IPv6=[] ContainerID="712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48" HandleID="k8s-pod-network.712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48" Workload="ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--zw96t-eth0" Jan 14 00:31:25.082512 containerd[1612]: 2026-01-14 00:31:24.999 [INFO][4704] cni-plugin/k8s.go 418: Populated endpoint ContainerID="712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48" Namespace="kube-system" Pod="coredns-674b8bbfcf-zw96t" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--zw96t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--zw96t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0d97a284-7542-4401-aed1-52eb725b1c6d", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 30, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a43761813d", ContainerID:"", Pod:"coredns-674b8bbfcf-zw96t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali651230bc05f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:31:25.082512 containerd[1612]: 2026-01-14 00:31:24.999 [INFO][4704] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.6/32] ContainerID="712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48" Namespace="kube-system" Pod="coredns-674b8bbfcf-zw96t" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--zw96t-eth0" Jan 14 00:31:25.082512 containerd[1612]: 2026-01-14 00:31:24.999 [INFO][4704] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali651230bc05f ContainerID="712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48" Namespace="kube-system" Pod="coredns-674b8bbfcf-zw96t" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--zw96t-eth0" Jan 14 00:31:25.082512 containerd[1612]: 2026-01-14 00:31:25.014 [INFO][4704] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-zw96t" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--zw96t-eth0" Jan 14 00:31:25.082512 containerd[1612]: 2026-01-14 00:31:25.017 [INFO][4704] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48" Namespace="kube-system" Pod="coredns-674b8bbfcf-zw96t" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--zw96t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--zw96t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0d97a284-7542-4401-aed1-52eb725b1c6d", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 30, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a43761813d", ContainerID:"712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48", Pod:"coredns-674b8bbfcf-zw96t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali651230bc05f", MAC:"26:58:08:5b:7e:e2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:31:25.082512 containerd[1612]: 2026-01-14 00:31:25.077 [INFO][4704] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48" Namespace="kube-system" Pod="coredns-674b8bbfcf-zw96t" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-coredns--674b8bbfcf--zw96t-eth0" Jan 14 00:31:25.146943 systemd-networkd[1497]: cali356715d1275: Gained IPv6LL Jan 14 00:31:25.167920 containerd[1612]: time="2026-01-14T00:31:25.160227511Z" level=info msg="connecting to shim 712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48" address="unix:///run/containerd/s/04785d3aeaedf629dcd532f9135b15d9dab0ca6b01b88f2a2382c5be982131cc" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:31:25.173270 kubelet[2886]: E0114 00:31:25.173192 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nsbsf" podUID="67055487-2b15-4e2a-8975-7fee787b4309" Jan 14 00:31:25.175180 kubelet[2886]: E0114 00:31:25.174203 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-wv9l2" podUID="e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a" Jan 14 00:31:25.199122 systemd-networkd[1497]: calic8d666687d7: Link UP Jan 14 00:31:25.212790 systemd-networkd[1497]: calic8d666687d7: Gained carrier Jan 14 00:31:25.263104 containerd[1612]: 2026-01-14 00:31:24.816 [INFO][4712] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 00:31:25.263104 containerd[1612]: 2026-01-14 00:31:24.851 [INFO][4712] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--a43761813d-k8s-calico--kube--controllers--5cbc4f559--f6xkt-eth0 calico-kube-controllers-5cbc4f559- calico-system c604fd3c-83a2-496b-a7f9-8f8a03e00409 878 0 2026-01-14 00:30:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5cbc4f559 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4547-0-0-n-a43761813d calico-kube-controllers-5cbc4f559-f6xkt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic8d666687d7 [] [] }} ContainerID="89eb5de6e5febe886a64f23bccebc8bf157afb132770d157ed99762cfa28ed0f" Namespace="calico-system" Pod="calico-kube-controllers-5cbc4f559-f6xkt" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-calico--kube--controllers--5cbc4f559--f6xkt-" Jan 14 00:31:25.263104 containerd[1612]: 2026-01-14 00:31:24.851 [INFO][4712] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="89eb5de6e5febe886a64f23bccebc8bf157afb132770d157ed99762cfa28ed0f" Namespace="calico-system" Pod="calico-kube-controllers-5cbc4f559-f6xkt" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-calico--kube--controllers--5cbc4f559--f6xkt-eth0" Jan 14 00:31:25.263104 containerd[1612]: 2026-01-14 00:31:24.925 [INFO][4738] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89eb5de6e5febe886a64f23bccebc8bf157afb132770d157ed99762cfa28ed0f" HandleID="k8s-pod-network.89eb5de6e5febe886a64f23bccebc8bf157afb132770d157ed99762cfa28ed0f" Workload="ci--4547--0--0--n--a43761813d-k8s-calico--kube--controllers--5cbc4f559--f6xkt-eth0" Jan 14 00:31:25.263104 containerd[1612]: 2026-01-14 00:31:24.926 [INFO][4738] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="89eb5de6e5febe886a64f23bccebc8bf157afb132770d157ed99762cfa28ed0f" HandleID="k8s-pod-network.89eb5de6e5febe886a64f23bccebc8bf157afb132770d157ed99762cfa28ed0f" Workload="ci--4547--0--0--n--a43761813d-k8s-calico--kube--controllers--5cbc4f559--f6xkt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb5a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-n-a43761813d", "pod":"calico-kube-controllers-5cbc4f559-f6xkt", "timestamp":"2026-01-14 
00:31:24.925646248 +0000 UTC"}, Hostname:"ci-4547-0-0-n-a43761813d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:31:25.263104 containerd[1612]: 2026-01-14 00:31:24.926 [INFO][4738] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:31:25.263104 containerd[1612]: 2026-01-14 00:31:24.994 [INFO][4738] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 00:31:25.263104 containerd[1612]: 2026-01-14 00:31:24.994 [INFO][4738] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-a43761813d' Jan 14 00:31:25.263104 containerd[1612]: 2026-01-14 00:31:25.025 [INFO][4738] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.89eb5de6e5febe886a64f23bccebc8bf157afb132770d157ed99762cfa28ed0f" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:25.263104 containerd[1612]: 2026-01-14 00:31:25.084 [INFO][4738] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:25.263104 containerd[1612]: 2026-01-14 00:31:25.106 [INFO][4738] ipam/ipam.go 511: Trying affinity for 192.168.105.0/26 host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:25.263104 containerd[1612]: 2026-01-14 00:31:25.110 [INFO][4738] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.0/26 host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:25.263104 containerd[1612]: 2026-01-14 00:31:25.124 [INFO][4738] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.0/26 host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:25.263104 containerd[1612]: 2026-01-14 00:31:25.124 [INFO][4738] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.0/26 handle="k8s-pod-network.89eb5de6e5febe886a64f23bccebc8bf157afb132770d157ed99762cfa28ed0f" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:25.263104 containerd[1612]: 2026-01-14 00:31:25.128 [INFO][4738] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.89eb5de6e5febe886a64f23bccebc8bf157afb132770d157ed99762cfa28ed0f Jan 14 00:31:25.263104 containerd[1612]: 2026-01-14 00:31:25.141 [INFO][4738] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.0/26 handle="k8s-pod-network.89eb5de6e5febe886a64f23bccebc8bf157afb132770d157ed99762cfa28ed0f" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:25.263104 containerd[1612]: 2026-01-14 00:31:25.162 [INFO][4738] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.105.7/26] block=192.168.105.0/26 handle="k8s-pod-network.89eb5de6e5febe886a64f23bccebc8bf157afb132770d157ed99762cfa28ed0f" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:25.263104 containerd[1612]: 2026-01-14 00:31:25.162 [INFO][4738] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.7/26] handle="k8s-pod-network.89eb5de6e5febe886a64f23bccebc8bf157afb132770d157ed99762cfa28ed0f" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:25.263104 containerd[1612]: 2026-01-14 00:31:25.162 [INFO][4738] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
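Annotation (not part of the journal): the kubelet pod_workers entries a few records above show the goldmane and calico-apiserver pods cycling through ErrImagePull and ImagePullBackOff. A hedged client-go sketch that lists containers currently waiting in those states; the kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption; adjust for the cluster at hand.
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// List pods across all namespaces and report containers stuck waiting
	// on image pulls, matching the kubelet errors recorded above.
	pods, err := clientset.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			if w := cs.State.Waiting; w != nil &&
				(w.Reason == "ImagePullBackOff" || w.Reason == "ErrImagePull") {
				fmt.Printf("%s/%s container %s: %s (%s)\n",
					pod.Namespace, pod.Name, cs.Name, w.Reason, cs.Image)
			}
		}
	}
}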
Jan 14 00:31:25.263104 containerd[1612]: 2026-01-14 00:31:25.163 [INFO][4738] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.7/26] IPv6=[] ContainerID="89eb5de6e5febe886a64f23bccebc8bf157afb132770d157ed99762cfa28ed0f" HandleID="k8s-pod-network.89eb5de6e5febe886a64f23bccebc8bf157afb132770d157ed99762cfa28ed0f" Workload="ci--4547--0--0--n--a43761813d-k8s-calico--kube--controllers--5cbc4f559--f6xkt-eth0" Jan 14 00:31:25.264574 containerd[1612]: 2026-01-14 00:31:25.184 [INFO][4712] cni-plugin/k8s.go 418: Populated endpoint ContainerID="89eb5de6e5febe886a64f23bccebc8bf157afb132770d157ed99762cfa28ed0f" Namespace="calico-system" Pod="calico-kube-controllers-5cbc4f559-f6xkt" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-calico--kube--controllers--5cbc4f559--f6xkt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a43761813d-k8s-calico--kube--controllers--5cbc4f559--f6xkt-eth0", GenerateName:"calico-kube-controllers-5cbc4f559-", Namespace:"calico-system", SelfLink:"", UID:"c604fd3c-83a2-496b-a7f9-8f8a03e00409", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 30, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cbc4f559", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a43761813d", ContainerID:"", Pod:"calico-kube-controllers-5cbc4f559-f6xkt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic8d666687d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:31:25.264574 containerd[1612]: 2026-01-14 00:31:25.185 [INFO][4712] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.7/32] ContainerID="89eb5de6e5febe886a64f23bccebc8bf157afb132770d157ed99762cfa28ed0f" Namespace="calico-system" Pod="calico-kube-controllers-5cbc4f559-f6xkt" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-calico--kube--controllers--5cbc4f559--f6xkt-eth0" Jan 14 00:31:25.264574 containerd[1612]: 2026-01-14 00:31:25.185 [INFO][4712] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic8d666687d7 ContainerID="89eb5de6e5febe886a64f23bccebc8bf157afb132770d157ed99762cfa28ed0f" Namespace="calico-system" Pod="calico-kube-controllers-5cbc4f559-f6xkt" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-calico--kube--controllers--5cbc4f559--f6xkt-eth0" Jan 14 00:31:25.264574 containerd[1612]: 2026-01-14 00:31:25.211 [INFO][4712] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89eb5de6e5febe886a64f23bccebc8bf157afb132770d157ed99762cfa28ed0f" Namespace="calico-system" Pod="calico-kube-controllers-5cbc4f559-f6xkt" 
WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-calico--kube--controllers--5cbc4f559--f6xkt-eth0" Jan 14 00:31:25.264574 containerd[1612]: 2026-01-14 00:31:25.217 [INFO][4712] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="89eb5de6e5febe886a64f23bccebc8bf157afb132770d157ed99762cfa28ed0f" Namespace="calico-system" Pod="calico-kube-controllers-5cbc4f559-f6xkt" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-calico--kube--controllers--5cbc4f559--f6xkt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a43761813d-k8s-calico--kube--controllers--5cbc4f559--f6xkt-eth0", GenerateName:"calico-kube-controllers-5cbc4f559-", Namespace:"calico-system", SelfLink:"", UID:"c604fd3c-83a2-496b-a7f9-8f8a03e00409", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 30, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cbc4f559", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a43761813d", ContainerID:"89eb5de6e5febe886a64f23bccebc8bf157afb132770d157ed99762cfa28ed0f", Pod:"calico-kube-controllers-5cbc4f559-f6xkt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic8d666687d7", MAC:"8e:e8:f6:94:ad:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:31:25.264574 containerd[1612]: 2026-01-14 00:31:25.252 [INFO][4712] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="89eb5de6e5febe886a64f23bccebc8bf157afb132770d157ed99762cfa28ed0f" Namespace="calico-system" Pod="calico-kube-controllers-5cbc4f559-f6xkt" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-calico--kube--controllers--5cbc4f559--f6xkt-eth0" Jan 14 00:31:25.266427 systemd[1]: Started cri-containerd-712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48.scope - libcontainer container 712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48. 
Jan 14 00:31:25.328774 containerd[1612]: time="2026-01-14T00:31:25.328579564Z" level=info msg="connecting to shim 89eb5de6e5febe886a64f23bccebc8bf157afb132770d157ed99762cfa28ed0f" address="unix:///run/containerd/s/dde70ec53a94348bf7dbbcd4bc02e35616da3ca73260dc39ee7cc600ba8822c5" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:31:25.342000 audit: BPF prog-id=203 op=LOAD Jan 14 00:31:25.346000 audit: BPF prog-id=204 op=LOAD Jan 14 00:31:25.346000 audit[4792]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4779 pid=4792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:25.346000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731326639616436613237366630373333646338633564363637353736 Jan 14 00:31:25.348000 audit: BPF prog-id=204 op=UNLOAD Jan 14 00:31:25.348000 audit[4792]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:25.348000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731326639616436613237366630373333646338633564363637353736 Jan 14 00:31:25.349000 audit: BPF prog-id=205 op=LOAD Jan 14 00:31:25.349000 audit[4792]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4779 pid=4792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:25.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731326639616436613237366630373333646338633564363637353736 Jan 14 00:31:25.350000 audit: BPF prog-id=206 op=LOAD Jan 14 00:31:25.350000 audit[4792]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4779 pid=4792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:25.350000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731326639616436613237366630373333646338633564363637353736 Jan 14 00:31:25.351000 audit: BPF prog-id=206 op=UNLOAD Jan 14 00:31:25.351000 audit[4792]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:25.351000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731326639616436613237366630373333646338633564363637353736 Jan 14 00:31:25.351000 audit: BPF prog-id=205 op=UNLOAD Jan 14 00:31:25.351000 audit[4792]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:25.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731326639616436613237366630373333646338633564363637353736 Jan 14 00:31:25.352000 audit: BPF prog-id=207 op=LOAD Jan 14 00:31:25.352000 audit[4792]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4779 pid=4792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:25.352000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731326639616436613237366630373333646338633564363637353736 Jan 14 00:31:25.377342 systemd[1]: Started cri-containerd-89eb5de6e5febe886a64f23bccebc8bf157afb132770d157ed99762cfa28ed0f.scope - libcontainer container 89eb5de6e5febe886a64f23bccebc8bf157afb132770d157ed99762cfa28ed0f. 
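Annotation (not part of the journal): once a sandbox scope is started, the entries that follow show the CreateContainer/StartContainer sequence for the coredns container inside its sandbox. The sketch below shows an equivalent create-and-start sequence using the standalone containerd Go client, not the CRI plugin's internal code path; the container ID and image reference are placeholders, and the image is assumed to be already unpacked in the namespace:

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Placeholder ref: any image already present and unpacked in this namespace.
	image, err := client.GetImage(ctx, "docker.io/library/busybox:latest")
	if err != nil {
		log.Fatal(err)
	}

	// Create container metadata plus a fresh snapshot, then start a task;
	// loosely mirrors the create/start pair visible in the log above.
	container, err := client.NewContainer(ctx, "demo-container",
		containerd.WithNewSnapshot("demo-container-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("task started with pid", task.Pid())
}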
Jan 14 00:31:25.386000 audit[4846]: NETFILTER_CFG table=filter:129 family=2 entries=16 op=nft_register_rule pid=4846 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:31:25.386000 audit[4846]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffcbd72d30 a2=0 a3=1 items=0 ppid=3053 pid=4846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:25.386000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:31:25.397000 audit[4846]: NETFILTER_CFG table=nat:130 family=2 entries=18 op=nft_register_rule pid=4846 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:31:25.397000 audit[4846]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5004 a0=3 a1=ffffcbd72d30 a2=0 a3=1 items=0 ppid=3053 pid=4846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:25.397000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:31:25.438416 containerd[1612]: time="2026-01-14T00:31:25.438358679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zw96t,Uid:0d97a284-7542-4401-aed1-52eb725b1c6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48\"" Jan 14 00:31:25.448707 containerd[1612]: time="2026-01-14T00:31:25.448613067Z" level=info msg="CreateContainer within sandbox \"712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 00:31:25.466000 audit: BPF prog-id=208 op=LOAD Jan 14 00:31:25.468000 audit: BPF prog-id=209 op=LOAD Jan 14 00:31:25.468000 audit[4834]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=4824 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:25.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839656235646536653566656265383836613634663233626363656263 Jan 14 00:31:25.468000 audit: BPF prog-id=209 op=UNLOAD Jan 14 00:31:25.468000 audit[4834]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4824 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:25.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839656235646536653566656265383836613634663233626363656263 Jan 14 00:31:25.468000 audit: BPF prog-id=210 op=LOAD Jan 14 00:31:25.468000 audit[4834]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=4824 pid=4834 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:25.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839656235646536653566656265383836613634663233626363656263 Jan 14 00:31:25.468000 audit: BPF prog-id=211 op=LOAD Jan 14 00:31:25.468000 audit[4834]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=4824 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:25.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839656235646536653566656265383836613634663233626363656263 Jan 14 00:31:25.469000 audit: BPF prog-id=211 op=UNLOAD Jan 14 00:31:25.469000 audit[4834]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4824 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:25.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839656235646536653566656265383836613634663233626363656263 Jan 14 00:31:25.469000 audit: BPF prog-id=210 op=UNLOAD Jan 14 00:31:25.469000 audit[4834]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4824 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:25.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839656235646536653566656265383836613634663233626363656263 Jan 14 00:31:25.469000 audit: BPF prog-id=212 op=LOAD Jan 14 00:31:25.469000 audit[4834]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=4824 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:25.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839656235646536653566656265383836613634663233626363656263 Jan 14 00:31:25.481876 containerd[1612]: time="2026-01-14T00:31:25.481716696Z" level=info msg="Container 5c3ed71c66d7acaef7273fdfee4d618a55eb1ad1eba64f1fec1c4a02f6040e4f: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:31:25.512799 containerd[1612]: time="2026-01-14T00:31:25.512710817Z" level=info msg="CreateContainer within sandbox 
\"712f9ad6a276f0733dc8c5d66757639b026a77ba47d865130916e87d944ddc48\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5c3ed71c66d7acaef7273fdfee4d618a55eb1ad1eba64f1fec1c4a02f6040e4f\"" Jan 14 00:31:25.515586 containerd[1612]: time="2026-01-14T00:31:25.515481042Z" level=info msg="StartContainer for \"5c3ed71c66d7acaef7273fdfee4d618a55eb1ad1eba64f1fec1c4a02f6040e4f\"" Jan 14 00:31:25.521698 containerd[1612]: time="2026-01-14T00:31:25.521640171Z" level=info msg="connecting to shim 5c3ed71c66d7acaef7273fdfee4d618a55eb1ad1eba64f1fec1c4a02f6040e4f" address="unix:///run/containerd/s/04785d3aeaedf629dcd532f9135b15d9dab0ca6b01b88f2a2382c5be982131cc" protocol=ttrpc version=3 Jan 14 00:31:25.552118 systemd[1]: Started cri-containerd-5c3ed71c66d7acaef7273fdfee4d618a55eb1ad1eba64f1fec1c4a02f6040e4f.scope - libcontainer container 5c3ed71c66d7acaef7273fdfee4d618a55eb1ad1eba64f1fec1c4a02f6040e4f. Jan 14 00:31:25.563855 containerd[1612]: time="2026-01-14T00:31:25.562876959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbc4f559-f6xkt,Uid:c604fd3c-83a2-496b-a7f9-8f8a03e00409,Namespace:calico-system,Attempt:0,} returns sandbox id \"89eb5de6e5febe886a64f23bccebc8bf157afb132770d157ed99762cfa28ed0f\"" Jan 14 00:31:25.569058 containerd[1612]: time="2026-01-14T00:31:25.568961007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 00:31:25.574000 audit: BPF prog-id=213 op=LOAD Jan 14 00:31:25.577000 audit: BPF prog-id=214 op=LOAD Jan 14 00:31:25.577000 audit[4860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=4779 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:25.577000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563336564373163363664376163616566373237336664666565346436 Jan 14 00:31:25.578000 audit: BPF prog-id=214 op=UNLOAD Jan 14 00:31:25.578000 audit[4860]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:25.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563336564373163363664376163616566373237336664666565346436 Jan 14 00:31:25.578000 audit: BPF prog-id=215 op=LOAD Jan 14 00:31:25.578000 audit[4860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4779 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:25.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563336564373163363664376163616566373237336664666565346436 Jan 14 00:31:25.579000 audit: BPF 
prog-id=216 op=LOAD Jan 14 00:31:25.579000 audit[4860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4779 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:25.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563336564373163363664376163616566373237336664666565346436 Jan 14 00:31:25.579000 audit: BPF prog-id=216 op=UNLOAD Jan 14 00:31:25.579000 audit[4860]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:25.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563336564373163363664376163616566373237336664666565346436 Jan 14 00:31:25.579000 audit: BPF prog-id=215 op=UNLOAD Jan 14 00:31:25.579000 audit[4860]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:25.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563336564373163363664376163616566373237336664666565346436 Jan 14 00:31:25.579000 audit: BPF prog-id=217 op=LOAD Jan 14 00:31:25.579000 audit[4860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=4779 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:25.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563336564373163363664376163616566373237336664666565346436 Jan 14 00:31:25.622112 containerd[1612]: time="2026-01-14T00:31:25.621687696Z" level=info msg="StartContainer for \"5c3ed71c66d7acaef7273fdfee4d618a55eb1ad1eba64f1fec1c4a02f6040e4f\" returns successfully" Jan 14 00:31:25.876185 sshd[4571]: Invalid user baikal from 5.187.35.21 port 36428 Jan 14 00:31:25.911067 containerd[1612]: time="2026-01-14T00:31:25.910942647Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:31:25.912679 containerd[1612]: time="2026-01-14T00:31:25.912580279Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 00:31:25.912799 containerd[1612]: 
time="2026-01-14T00:31:25.912720638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 00:31:25.913185 kubelet[2886]: E0114 00:31:25.913111 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 00:31:25.913286 kubelet[2886]: E0114 00:31:25.913189 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 00:31:25.913474 kubelet[2886]: E0114 00:31:25.913383 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xsckj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-kube-controllers-5cbc4f559-f6xkt_calico-system(c604fd3c-83a2-496b-a7f9-8f8a03e00409): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 00:31:25.915115 kubelet[2886]: E0114 00:31:25.915006 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5cbc4f559-f6xkt" podUID="c604fd3c-83a2-496b-a7f9-8f8a03e00409" Jan 14 00:31:26.177882 kubelet[2886]: E0114 00:31:26.177243 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5cbc4f559-f6xkt" podUID="c604fd3c-83a2-496b-a7f9-8f8a03e00409" Jan 14 00:31:26.425000 audit[4919]: NETFILTER_CFG table=filter:131 family=2 entries=16 op=nft_register_rule pid=4919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:31:26.425000 audit[4919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffdf4fee60 a2=0 a3=1 items=0 ppid=3053 pid=4919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:26.425000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:31:26.440000 audit[4919]: NETFILTER_CFG table=nat:132 family=2 entries=54 op=nft_register_chain pid=4919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:31:26.440000 audit[4919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19092 a0=3 a1=ffffdf4fee60 a2=0 a3=1 items=0 ppid=3053 pid=4919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:26.440000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:31:26.604363 sshd[4571]: Connection closed by invalid user baikal 5.187.35.21 port 36428 [preauth] Jan 14 00:31:26.603000 audit[4571]: USER_ERR pid=4571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=5.187.35.21 addr=5.187.35.21 terminal=ssh res=failed' Jan 14 00:31:26.607609 systemd[1]: sshd@33-91.99.0.249:22-5.187.35.21:36428.service: Deactivated successfully. Jan 14 00:31:26.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-91.99.0.249:22-5.187.35.21:36428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 14 00:31:26.681269 systemd-networkd[1497]: calic8d666687d7: Gained IPv6LL Jan 14 00:31:26.769641 containerd[1612]: time="2026-01-14T00:31:26.768743652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8rkfb,Uid:c4ec9a31-66c9-4bf7-a831-6c170af7211c,Namespace:calico-system,Attempt:0,}" Jan 14 00:31:26.810117 systemd-networkd[1497]: cali651230bc05f: Gained IPv6LL Jan 14 00:31:26.974051 systemd-networkd[1497]: cali99653a857bb: Link UP Jan 14 00:31:26.976973 systemd-networkd[1497]: cali99653a857bb: Gained carrier Jan 14 00:31:27.003657 kubelet[2886]: I0114 00:31:27.003486 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zw96t" podStartSLOduration=52.003464915 podStartE2EDuration="52.003464915s" podCreationTimestamp="2026-01-14 00:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:31:26.225208899 +0000 UTC m=+56.621304734" watchObservedRunningTime="2026-01-14 00:31:27.003464915 +0000 UTC m=+57.399560710" Jan 14 00:31:27.009733 containerd[1612]: 2026-01-14 00:31:26.838 [INFO][4926] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 00:31:27.009733 containerd[1612]: 2026-01-14 00:31:26.863 [INFO][4926] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--a43761813d-k8s-csi--node--driver--8rkfb-eth0 csi-node-driver- calico-system c4ec9a31-66c9-4bf7-a831-6c170af7211c 775 0 2026-01-14 00:30:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547-0-0-n-a43761813d csi-node-driver-8rkfb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali99653a857bb [] [] }} ContainerID="1bd306dd83e38a676164674f3bdf4deec3a6d2d861f66b76ba725cf8236ea8b1" Namespace="calico-system" Pod="csi-node-driver-8rkfb" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-csi--node--driver--8rkfb-" Jan 14 00:31:27.009733 containerd[1612]: 2026-01-14 00:31:26.863 [INFO][4926] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1bd306dd83e38a676164674f3bdf4deec3a6d2d861f66b76ba725cf8236ea8b1" Namespace="calico-system" Pod="csi-node-driver-8rkfb" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-csi--node--driver--8rkfb-eth0" Jan 14 00:31:27.009733 containerd[1612]: 2026-01-14 00:31:26.901 [INFO][4938] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1bd306dd83e38a676164674f3bdf4deec3a6d2d861f66b76ba725cf8236ea8b1" HandleID="k8s-pod-network.1bd306dd83e38a676164674f3bdf4deec3a6d2d861f66b76ba725cf8236ea8b1" Workload="ci--4547--0--0--n--a43761813d-k8s-csi--node--driver--8rkfb-eth0" Jan 14 00:31:27.009733 containerd[1612]: 2026-01-14 00:31:26.901 [INFO][4938] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1bd306dd83e38a676164674f3bdf4deec3a6d2d861f66b76ba725cf8236ea8b1" HandleID="k8s-pod-network.1bd306dd83e38a676164674f3bdf4deec3a6d2d861f66b76ba725cf8236ea8b1" Workload="ci--4547--0--0--n--a43761813d-k8s-csi--node--driver--8rkfb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c0fe0), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4547-0-0-n-a43761813d", "pod":"csi-node-driver-8rkfb", "timestamp":"2026-01-14 00:31:26.901497226 +0000 UTC"}, Hostname:"ci-4547-0-0-n-a43761813d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:31:27.009733 containerd[1612]: 2026-01-14 00:31:26.901 [INFO][4938] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:31:27.009733 containerd[1612]: 2026-01-14 00:31:26.901 [INFO][4938] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 00:31:27.009733 containerd[1612]: 2026-01-14 00:31:26.901 [INFO][4938] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-a43761813d' Jan 14 00:31:27.009733 containerd[1612]: 2026-01-14 00:31:26.914 [INFO][4938] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1bd306dd83e38a676164674f3bdf4deec3a6d2d861f66b76ba725cf8236ea8b1" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:27.009733 containerd[1612]: 2026-01-14 00:31:26.922 [INFO][4938] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:27.009733 containerd[1612]: 2026-01-14 00:31:26.929 [INFO][4938] ipam/ipam.go 511: Trying affinity for 192.168.105.0/26 host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:27.009733 containerd[1612]: 2026-01-14 00:31:26.932 [INFO][4938] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.0/26 host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:27.009733 containerd[1612]: 2026-01-14 00:31:26.937 [INFO][4938] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.0/26 host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:27.009733 containerd[1612]: 2026-01-14 00:31:26.938 [INFO][4938] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.0/26 handle="k8s-pod-network.1bd306dd83e38a676164674f3bdf4deec3a6d2d861f66b76ba725cf8236ea8b1" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:27.009733 containerd[1612]: 2026-01-14 00:31:26.941 [INFO][4938] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1bd306dd83e38a676164674f3bdf4deec3a6d2d861f66b76ba725cf8236ea8b1 Jan 14 00:31:27.009733 containerd[1612]: 2026-01-14 00:31:26.948 [INFO][4938] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.0/26 handle="k8s-pod-network.1bd306dd83e38a676164674f3bdf4deec3a6d2d861f66b76ba725cf8236ea8b1" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:27.009733 containerd[1612]: 2026-01-14 00:31:26.963 [INFO][4938] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.105.8/26] block=192.168.105.0/26 handle="k8s-pod-network.1bd306dd83e38a676164674f3bdf4deec3a6d2d861f66b76ba725cf8236ea8b1" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:27.009733 containerd[1612]: 2026-01-14 00:31:26.963 [INFO][4938] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.8/26] handle="k8s-pod-network.1bd306dd83e38a676164674f3bdf4deec3a6d2d861f66b76ba725cf8236ea8b1" host="ci-4547-0-0-n-a43761813d" Jan 14 00:31:27.009733 containerd[1612]: 2026-01-14 00:31:26.963 [INFO][4938] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 00:31:27.009733 containerd[1612]: 2026-01-14 00:31:26.963 [INFO][4938] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.8/26] IPv6=[] ContainerID="1bd306dd83e38a676164674f3bdf4deec3a6d2d861f66b76ba725cf8236ea8b1" HandleID="k8s-pod-network.1bd306dd83e38a676164674f3bdf4deec3a6d2d861f66b76ba725cf8236ea8b1" Workload="ci--4547--0--0--n--a43761813d-k8s-csi--node--driver--8rkfb-eth0" Jan 14 00:31:27.010409 containerd[1612]: 2026-01-14 00:31:26.968 [INFO][4926] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1bd306dd83e38a676164674f3bdf4deec3a6d2d861f66b76ba725cf8236ea8b1" Namespace="calico-system" Pod="csi-node-driver-8rkfb" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-csi--node--driver--8rkfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a43761813d-k8s-csi--node--driver--8rkfb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c4ec9a31-66c9-4bf7-a831-6c170af7211c", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 30, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a43761813d", ContainerID:"", Pod:"csi-node-driver-8rkfb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali99653a857bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:31:27.010409 containerd[1612]: 2026-01-14 00:31:26.968 [INFO][4926] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.8/32] ContainerID="1bd306dd83e38a676164674f3bdf4deec3a6d2d861f66b76ba725cf8236ea8b1" Namespace="calico-system" Pod="csi-node-driver-8rkfb" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-csi--node--driver--8rkfb-eth0" Jan 14 00:31:27.010409 containerd[1612]: 2026-01-14 00:31:26.968 [INFO][4926] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali99653a857bb ContainerID="1bd306dd83e38a676164674f3bdf4deec3a6d2d861f66b76ba725cf8236ea8b1" Namespace="calico-system" Pod="csi-node-driver-8rkfb" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-csi--node--driver--8rkfb-eth0" Jan 14 00:31:27.010409 containerd[1612]: 2026-01-14 00:31:26.979 [INFO][4926] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1bd306dd83e38a676164674f3bdf4deec3a6d2d861f66b76ba725cf8236ea8b1" Namespace="calico-system" Pod="csi-node-driver-8rkfb" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-csi--node--driver--8rkfb-eth0" Jan 14 00:31:27.010409 containerd[1612]: 2026-01-14 00:31:26.981 [INFO][4926] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1bd306dd83e38a676164674f3bdf4deec3a6d2d861f66b76ba725cf8236ea8b1" Namespace="calico-system" Pod="csi-node-driver-8rkfb" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-csi--node--driver--8rkfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a43761813d-k8s-csi--node--driver--8rkfb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c4ec9a31-66c9-4bf7-a831-6c170af7211c", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 30, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a43761813d", ContainerID:"1bd306dd83e38a676164674f3bdf4deec3a6d2d861f66b76ba725cf8236ea8b1", Pod:"csi-node-driver-8rkfb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali99653a857bb", MAC:"ee:21:43:9e:41:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:31:27.010409 containerd[1612]: 2026-01-14 00:31:27.006 [INFO][4926] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1bd306dd83e38a676164674f3bdf4deec3a6d2d861f66b76ba725cf8236ea8b1" Namespace="calico-system" Pod="csi-node-driver-8rkfb" WorkloadEndpoint="ci--4547--0--0--n--a43761813d-k8s-csi--node--driver--8rkfb-eth0" Jan 14 00:31:27.060130 containerd[1612]: time="2026-01-14T00:31:27.059848359Z" level=info msg="connecting to shim 1bd306dd83e38a676164674f3bdf4deec3a6d2d861f66b76ba725cf8236ea8b1" address="unix:///run/containerd/s/00a58561870ad741102d294f2a57483ab8d7f289738a1105bbb93d5d20d4da21" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:31:27.101120 systemd[1]: Started cri-containerd-1bd306dd83e38a676164674f3bdf4deec3a6d2d861f66b76ba725cf8236ea8b1.scope - libcontainer container 1bd306dd83e38a676164674f3bdf4deec3a6d2d861f66b76ba725cf8236ea8b1. 
Jan 14 00:31:27.114000 audit: BPF prog-id=218 op=LOAD Jan 14 00:31:27.115000 audit: BPF prog-id=219 op=LOAD Jan 14 00:31:27.115000 audit[4969]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4958 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:27.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162643330366464383365333861363736313634363734663362646634 Jan 14 00:31:27.116000 audit: BPF prog-id=219 op=UNLOAD Jan 14 00:31:27.116000 audit[4969]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4958 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:27.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162643330366464383365333861363736313634363734663362646634 Jan 14 00:31:27.116000 audit: BPF prog-id=220 op=LOAD Jan 14 00:31:27.116000 audit[4969]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4958 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:27.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162643330366464383365333861363736313634363734663362646634 Jan 14 00:31:27.116000 audit: BPF prog-id=221 op=LOAD Jan 14 00:31:27.116000 audit[4969]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4958 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:27.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162643330366464383365333861363736313634363734663362646634 Jan 14 00:31:27.116000 audit: BPF prog-id=221 op=UNLOAD Jan 14 00:31:27.116000 audit[4969]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4958 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:27.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162643330366464383365333861363736313634363734663362646634 Jan 14 00:31:27.116000 audit: BPF prog-id=220 op=UNLOAD Jan 14 00:31:27.116000 audit[4969]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4958 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:27.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162643330366464383365333861363736313634363734663362646634 Jan 14 00:31:27.116000 audit: BPF prog-id=222 op=LOAD Jan 14 00:31:27.116000 audit[4969]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4958 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:27.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162643330366464383365333861363736313634363734663362646634 Jan 14 00:31:27.146342 containerd[1612]: time="2026-01-14T00:31:27.146280456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8rkfb,Uid:c4ec9a31-66c9-4bf7-a831-6c170af7211c,Namespace:calico-system,Attempt:0,} returns sandbox id \"1bd306dd83e38a676164674f3bdf4deec3a6d2d861f66b76ba725cf8236ea8b1\"" Jan 14 00:31:27.157777 containerd[1612]: time="2026-01-14T00:31:27.157660320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 00:31:27.189454 kubelet[2886]: E0114 00:31:27.189388 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5cbc4f559-f6xkt" podUID="c604fd3c-83a2-496b-a7f9-8f8a03e00409" Jan 14 00:31:27.499276 containerd[1612]: time="2026-01-14T00:31:27.499100450Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:31:27.502372 containerd[1612]: time="2026-01-14T00:31:27.502284755Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 00:31:27.502597 containerd[1612]: time="2026-01-14T00:31:27.502318834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 00:31:27.503071 kubelet[2886]: E0114 00:31:27.502967 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:31:27.503307 kubelet[2886]: E0114 00:31:27.503179 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:31:27.504407 kubelet[2886]: E0114 00:31:27.504339 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zv2gz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8rkfb_calico-system(c4ec9a31-66c9-4bf7-a831-6c170af7211c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 00:31:27.508630 containerd[1612]: time="2026-01-14T00:31:27.508547244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 00:31:27.840287 containerd[1612]: time="2026-01-14T00:31:27.840201942Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:31:27.841929 containerd[1612]: time="2026-01-14T00:31:27.841867333Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 00:31:27.842182 containerd[1612]: time="2026-01-14T00:31:27.841897373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 00:31:27.842514 kubelet[2886]: E0114 00:31:27.842425 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:31:27.842768 kubelet[2886]: E0114 00:31:27.842698 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:31:27.843363 kubelet[2886]: E0114 00:31:27.843269 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zv2gz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8rkfb_calico-system(c4ec9a31-66c9-4bf7-a831-6c170af7211c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 00:31:27.844701 kubelet[2886]: E0114 00:31:27.844584 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-8rkfb" podUID="c4ec9a31-66c9-4bf7-a831-6c170af7211c" Jan 14 00:31:28.194478 kubelet[2886]: E0114 00:31:28.193936 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rkfb" podUID="c4ec9a31-66c9-4bf7-a831-6c170af7211c" Jan 14 00:31:28.601182 systemd-networkd[1497]: cali99653a857bb: Gained IPv6LL Jan 14 00:31:29.361515 kubelet[2886]: I0114 00:31:29.361454 2886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 00:31:29.438000 audit[5037]: NETFILTER_CFG table=filter:133 family=2 entries=15 op=nft_register_rule pid=5037 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:31:29.440397 kernel: kauditd_printk_skb: 179 callbacks suppressed Jan 14 00:31:29.440496 kernel: audit: type=1325 audit(1768350689.438:743): table=filter:133 family=2 entries=15 op=nft_register_rule pid=5037 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:31:29.438000 audit[5037]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcb1608f0 a2=0 a3=1 items=0 ppid=3053 pid=5037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:29.445254 kernel: audit: type=1300 audit(1768350689.438:743): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcb1608f0 a2=0 a3=1 items=0 ppid=3053 pid=5037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:29.438000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:31:29.447531 kernel: audit: type=1327 audit(1768350689.438:743): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:31:29.449000 audit[5037]: NETFILTER_CFG table=nat:134 family=2 entries=25 op=nft_register_chain pid=5037 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:31:29.449000 audit[5037]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8580 a0=3 a1=ffffcb1608f0 a2=0 a3=1 items=0 ppid=3053 pid=5037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:29.454178 kernel: audit: type=1325 audit(1768350689.449:744): table=nat:134 family=2 entries=25 op=nft_register_chain pid=5037 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:31:29.454303 kernel: audit: 
type=1300 audit(1768350689.449:744): arch=c00000b7 syscall=211 success=yes exit=8580 a0=3 a1=ffffcb1608f0 a2=0 a3=1 items=0 ppid=3053 pid=5037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:29.454340 kernel: audit: type=1327 audit(1768350689.449:744): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:31:29.449000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:31:30.301000 audit: BPF prog-id=223 op=LOAD Jan 14 00:31:30.301000 audit[5094]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd2aa8668 a2=98 a3=ffffd2aa8658 items=0 ppid=5054 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.305666 kernel: audit: type=1334 audit(1768350690.301:745): prog-id=223 op=LOAD Jan 14 00:31:30.308347 kernel: audit: type=1300 audit(1768350690.301:745): arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd2aa8668 a2=98 a3=ffffd2aa8658 items=0 ppid=5054 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.308404 kernel: audit: type=1327 audit(1768350690.301:745): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:31:30.308453 kernel: audit: type=1334 audit(1768350690.303:746): prog-id=223 op=UNLOAD Jan 14 00:31:30.301000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:31:30.303000 audit: BPF prog-id=223 op=UNLOAD Jan 14 00:31:30.303000 audit[5094]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd2aa8638 a3=0 items=0 ppid=5054 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.303000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:31:30.303000 audit: BPF prog-id=224 op=LOAD Jan 14 00:31:30.303000 audit[5094]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd2aa8518 a2=74 a3=95 items=0 ppid=5054 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.303000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:31:30.307000 audit: BPF prog-id=224 op=UNLOAD Jan 14 00:31:30.307000 audit[5094]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5054 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.307000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:31:30.307000 audit: BPF prog-id=225 op=LOAD Jan 14 00:31:30.307000 audit[5094]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd2aa8548 a2=40 a3=ffffd2aa8578 items=0 ppid=5054 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.307000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:31:30.308000 audit: BPF prog-id=225 op=UNLOAD Jan 14 00:31:30.308000 audit[5094]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffd2aa8578 items=0 ppid=5054 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.308000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:31:30.310000 audit: BPF prog-id=226 op=LOAD Jan 14 00:31:30.310000 audit[5095]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdcf90f38 a2=98 a3=ffffdcf90f28 items=0 ppid=5054 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.310000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:31:30.311000 audit: BPF prog-id=226 op=UNLOAD Jan 14 00:31:30.311000 audit[5095]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffdcf90f08 a3=0 items=0 ppid=5054 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.311000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:31:30.311000 audit: BPF prog-id=227 op=LOAD Jan 14 00:31:30.311000 audit[5095]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdcf90bc8 a2=74 a3=95 items=0 ppid=5054 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.311000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:31:30.312000 audit: BPF prog-id=227 op=UNLOAD Jan 14 00:31:30.312000 audit[5095]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=5054 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.312000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:31:30.312000 audit: BPF prog-id=228 op=LOAD Jan 14 00:31:30.312000 audit[5095]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdcf90c28 a2=94 a3=2 items=0 ppid=5054 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.312000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:31:30.313000 audit: BPF prog-id=228 op=UNLOAD Jan 14 00:31:30.313000 audit[5095]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=5054 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.313000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:31:30.431000 audit: BPF prog-id=229 op=LOAD Jan 14 00:31:30.431000 audit[5095]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdcf90be8 a2=40 a3=ffffdcf90c18 items=0 ppid=5054 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.431000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:31:30.432000 audit: BPF prog-id=229 op=UNLOAD Jan 14 00:31:30.432000 audit[5095]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffdcf90c18 items=0 ppid=5054 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.432000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:31:30.442000 audit: BPF prog-id=230 op=LOAD Jan 14 00:31:30.442000 audit[5095]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdcf90bf8 a2=94 a3=4 items=0 ppid=5054 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.442000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:31:30.442000 audit: BPF prog-id=230 op=UNLOAD Jan 14 00:31:30.442000 audit[5095]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=5054 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.442000 audit: 
PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:31:30.443000 audit: BPF prog-id=231 op=LOAD Jan 14 00:31:30.443000 audit[5095]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffdcf90a38 a2=94 a3=5 items=0 ppid=5054 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.443000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:31:30.443000 audit: BPF prog-id=231 op=UNLOAD Jan 14 00:31:30.443000 audit[5095]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=5054 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.443000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:31:30.443000 audit: BPF prog-id=232 op=LOAD Jan 14 00:31:30.443000 audit[5095]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdcf90c68 a2=94 a3=6 items=0 ppid=5054 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.443000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:31:30.443000 audit: BPF prog-id=232 op=UNLOAD Jan 14 00:31:30.443000 audit[5095]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=5054 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.443000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:31:30.443000 audit: BPF prog-id=233 op=LOAD Jan 14 00:31:30.443000 audit[5095]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdcf90438 a2=94 a3=83 items=0 ppid=5054 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.443000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:31:30.444000 audit: BPF prog-id=234 op=LOAD Jan 14 00:31:30.444000 audit[5095]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffdcf901f8 a2=94 a3=2 items=0 ppid=5054 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.444000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:31:30.445000 audit: BPF prog-id=234 op=UNLOAD Jan 14 00:31:30.445000 audit[5095]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=5054 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.445000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:31:30.446000 audit: BPF prog-id=233 op=UNLOAD Jan 14 00:31:30.446000 audit[5095]: SYSCALL arch=c00000b7 syscall=57 
success=yes exit=0 a0=5 a1=57156c a2=14be6620 a3=14bd9b00 items=0 ppid=5054 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.446000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:31:30.468000 audit: BPF prog-id=235 op=LOAD Jan 14 00:31:30.468000 audit[5098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc61e7c78 a2=98 a3=ffffc61e7c68 items=0 ppid=5054 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.468000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 00:31:30.468000 audit: BPF prog-id=235 op=UNLOAD Jan 14 00:31:30.468000 audit[5098]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc61e7c48 a3=0 items=0 ppid=5054 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.468000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 00:31:30.468000 audit: BPF prog-id=236 op=LOAD Jan 14 00:31:30.468000 audit[5098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc61e7b28 a2=74 a3=95 items=0 ppid=5054 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.468000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 00:31:30.468000 audit: BPF prog-id=236 op=UNLOAD Jan 14 00:31:30.468000 audit[5098]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5054 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.468000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 00:31:30.468000 audit: BPF prog-id=237 op=LOAD Jan 14 00:31:30.468000 audit[5098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc61e7b58 a2=40 a3=ffffc61e7b88 items=0 ppid=5054 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
00:31:30.468000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 00:31:30.468000 audit: BPF prog-id=237 op=UNLOAD Jan 14 00:31:30.468000 audit[5098]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffc61e7b88 items=0 ppid=5054 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.468000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 00:31:30.584099 systemd-networkd[1497]: vxlan.calico: Link UP Jan 14 00:31:30.584108 systemd-networkd[1497]: vxlan.calico: Gained carrier Jan 14 00:31:30.630000 audit: BPF prog-id=238 op=LOAD Jan 14 00:31:30.630000 audit[5127]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd86138a8 a2=98 a3=ffffd8613898 items=0 ppid=5054 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.630000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:31:30.630000 audit: BPF prog-id=238 op=UNLOAD Jan 14 00:31:30.630000 audit[5127]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd8613878 a3=0 items=0 ppid=5054 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.630000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:31:30.631000 audit: BPF prog-id=239 op=LOAD Jan 14 00:31:30.631000 audit[5127]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd8613588 a2=74 a3=95 items=0 ppid=5054 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.631000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:31:30.631000 audit: BPF prog-id=239 op=UNLOAD Jan 14 00:31:30.631000 audit[5127]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5054 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.631000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:31:30.631000 audit: BPF prog-id=240 op=LOAD Jan 14 00:31:30.631000 audit[5127]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd86135e8 a2=94 a3=2 items=0 ppid=5054 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.631000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:31:30.631000 audit: BPF prog-id=240 op=UNLOAD Jan 14 00:31:30.631000 audit[5127]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=5054 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.631000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:31:30.631000 audit: BPF prog-id=241 op=LOAD Jan 14 00:31:30.631000 audit[5127]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd8613468 a2=40 a3=ffffd8613498 items=0 ppid=5054 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.631000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:31:30.631000 audit: BPF prog-id=241 op=UNLOAD Jan 14 00:31:30.631000 audit[5127]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffd8613498 items=0 ppid=5054 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.631000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:31:30.631000 audit: BPF prog-id=242 op=LOAD Jan 14 00:31:30.631000 audit[5127]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd86135b8 a2=94 a3=b7 items=0 ppid=5054 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.631000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:31:30.631000 audit: BPF prog-id=242 op=UNLOAD Jan 14 00:31:30.631000 audit[5127]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=5054 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.631000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:31:30.632000 audit: BPF prog-id=243 op=LOAD Jan 14 00:31:30.632000 audit[5127]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd8612c68 a2=94 a3=2 items=0 ppid=5054 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.632000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:31:30.632000 audit: BPF prog-id=243 op=UNLOAD Jan 14 00:31:30.632000 audit[5127]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=5054 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.632000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:31:30.632000 audit: BPF prog-id=244 op=LOAD Jan 14 00:31:30.632000 audit[5127]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd8612df8 a2=94 a3=30 items=0 ppid=5054 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.632000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:31:30.651000 audit: BPF prog-id=245 op=LOAD Jan 14 00:31:30.651000 audit[5129]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff9ae4e58 a2=98 a3=fffff9ae4e48 items=0 ppid=5054 pid=5129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.651000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:31:30.652000 audit: BPF prog-id=245 op=UNLOAD Jan 14 00:31:30.652000 audit[5129]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff9ae4e28 a3=0 items=0 ppid=5054 pid=5129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.652000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:31:30.652000 audit: BPF prog-id=246 op=LOAD Jan 14 00:31:30.652000 audit[5129]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff9ae4ae8 a2=74 a3=95 items=0 ppid=5054 pid=5129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.652000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:31:30.653000 audit: BPF prog-id=246 op=UNLOAD Jan 14 00:31:30.653000 audit[5129]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=5054 pid=5129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.653000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:31:30.653000 audit: BPF prog-id=247 op=LOAD Jan 14 00:31:30.653000 audit[5129]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff9ae4b48 a2=94 a3=2 items=0 ppid=5054 pid=5129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.653000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:31:30.653000 audit: BPF prog-id=247 op=UNLOAD Jan 14 00:31:30.653000 audit[5129]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=5054 pid=5129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.653000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:31:30.785000 audit: BPF prog-id=248 op=LOAD Jan 14 00:31:30.785000 audit[5129]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff9ae4b08 a2=40 a3=fffff9ae4b38 items=0 ppid=5054 pid=5129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.785000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:31:30.785000 audit: BPF prog-id=248 op=UNLOAD Jan 14 00:31:30.785000 audit[5129]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=fffff9ae4b38 items=0 ppid=5054 pid=5129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.785000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:31:30.797000 audit: BPF prog-id=249 op=LOAD Jan 14 00:31:30.797000 audit[5129]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff9ae4b18 a2=94 a3=4 items=0 ppid=5054 pid=5129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.797000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:31:30.798000 audit: BPF prog-id=249 op=UNLOAD Jan 14 00:31:30.798000 audit[5129]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=5054 pid=5129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.798000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:31:30.798000 audit: BPF prog-id=250 op=LOAD Jan 14 00:31:30.798000 audit[5129]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff9ae4958 a2=94 a3=5 items=0 ppid=5054 pid=5129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.798000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:31:30.798000 audit: BPF prog-id=250 op=UNLOAD Jan 14 00:31:30.798000 audit[5129]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=5054 pid=5129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.798000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:31:30.798000 audit: BPF prog-id=251 op=LOAD Jan 14 00:31:30.798000 audit[5129]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff9ae4b88 a2=94 a3=6 items=0 ppid=5054 pid=5129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.798000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:31:30.798000 audit: BPF prog-id=251 op=UNLOAD Jan 14 00:31:30.798000 audit[5129]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 
items=0 ppid=5054 pid=5129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.798000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:31:30.798000 audit: BPF prog-id=252 op=LOAD Jan 14 00:31:30.798000 audit[5129]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff9ae4358 a2=94 a3=83 items=0 ppid=5054 pid=5129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.798000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:31:30.799000 audit: BPF prog-id=253 op=LOAD Jan 14 00:31:30.799000 audit[5129]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=fffff9ae4118 a2=94 a3=2 items=0 ppid=5054 pid=5129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.799000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:31:30.799000 audit: BPF prog-id=253 op=UNLOAD Jan 14 00:31:30.799000 audit[5129]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=5054 pid=5129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.799000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:31:30.799000 audit: BPF prog-id=252 op=UNLOAD Jan 14 00:31:30.799000 audit[5129]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=3ed3d620 a3=3ed30b00 items=0 ppid=5054 pid=5129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.799000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:31:30.806000 audit: BPF prog-id=244 op=UNLOAD Jan 14 00:31:30.806000 audit[5054]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=400081c100 a2=0 a3=0 items=0 ppid=4212 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.806000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 00:31:30.954000 audit[5163]: NETFILTER_CFG table=mangle:135 family=2 entries=16 op=nft_register_chain pid=5163 subj=system_u:system_r:kernel_t:s0 
comm="iptables-nft-re" Jan 14 00:31:30.954000 audit[5163]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=fffff8652f70 a2=0 a3=ffff95ac5fa8 items=0 ppid=5054 pid=5163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.954000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:31:30.963000 audit[5164]: NETFILTER_CFG table=nat:136 family=2 entries=15 op=nft_register_chain pid=5164 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:31:30.963000 audit[5164]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffd755b7e0 a2=0 a3=ffff8eb83fa8 items=0 ppid=5054 pid=5164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.963000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:31:30.970000 audit[5161]: NETFILTER_CFG table=raw:137 family=2 entries=21 op=nft_register_chain pid=5161 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:31:30.970000 audit[5161]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffd4739280 a2=0 a3=ffff89cb1fa8 items=0 ppid=5054 pid=5161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.970000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:31:30.971000 audit[5162]: NETFILTER_CFG table=filter:138 family=2 entries=327 op=nft_register_chain pid=5162 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:31:30.971000 audit[5162]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=193468 a0=3 a1=ffffc52cd380 a2=0 a3=ffffb6d36fa8 items=0 ppid=5054 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:31:30.971000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:31:32.446128 systemd-networkd[1497]: vxlan.calico: Gained IPv6LL Jan 14 00:31:34.771603 containerd[1612]: time="2026-01-14T00:31:34.769608100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 00:31:35.118787 containerd[1612]: time="2026-01-14T00:31:35.118617914Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:31:35.120739 containerd[1612]: time="2026-01-14T00:31:35.120604266Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 00:31:35.122672 containerd[1612]: 
time="2026-01-14T00:31:35.121063984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 00:31:35.122838 kubelet[2886]: E0114 00:31:35.121421 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:31:35.122838 kubelet[2886]: E0114 00:31:35.121557 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:31:35.123292 kubelet[2886]: E0114 00:31:35.122547 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:512ab7b5f96042dca2636ff671657425,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zsdtd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d779df8f-4d6qc_calico-system(ef34e9a3-c694-439f-b718-c0f1c8545835): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 00:31:35.127141 containerd[1612]: time="2026-01-14T00:31:35.126846601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 00:31:35.458893 containerd[1612]: time="2026-01-14T00:31:35.458703587Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:31:35.462918 containerd[1612]: time="2026-01-14T00:31:35.462419412Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 00:31:35.463150 containerd[1612]: time="2026-01-14T00:31:35.462616931Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 00:31:35.463450 kubelet[2886]: E0114 00:31:35.463406 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:31:35.463901 kubelet[2886]: E0114 00:31:35.463631 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:31:35.464108 kubelet[2886]: E0114 00:31:35.464049 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsdtd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d779df8f-4d6qc_calico-system(ef34e9a3-c694-439f-b718-c0f1c8545835): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 00:31:35.465626 kubelet[2886]: E0114 00:31:35.465361 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: 
\"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66d779df8f-4d6qc" podUID="ef34e9a3-c694-439f-b718-c0f1c8545835" Jan 14 00:31:37.770121 containerd[1612]: time="2026-01-14T00:31:37.769759092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:31:38.117874 containerd[1612]: time="2026-01-14T00:31:38.117520888Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:31:38.121096 containerd[1612]: time="2026-01-14T00:31:38.120482837Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:31:38.121096 containerd[1612]: time="2026-01-14T00:31:38.120920756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:31:38.121704 kubelet[2886]: E0114 00:31:38.121485 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:31:38.121704 kubelet[2886]: E0114 00:31:38.121562 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:31:38.123850 kubelet[2886]: E0114 00:31:38.121991 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tvhph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b56c4d86d-jdxwl_calico-apiserver(900e30be-5423-4b2c-9623-a51920c0a748): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:31:38.124073 kubelet[2886]: E0114 00:31:38.123990 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-jdxwl" podUID="900e30be-5423-4b2c-9623-a51920c0a748" Jan 14 00:31:38.124263 containerd[1612]: time="2026-01-14T00:31:38.124207263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 00:31:38.460071 containerd[1612]: time="2026-01-14T00:31:38.459620687Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:31:38.463694 containerd[1612]: time="2026-01-14T00:31:38.463622672Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 00:31:38.465075 containerd[1612]: time="2026-01-14T00:31:38.463853511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 00:31:38.466492 kubelet[2886]: E0114 00:31:38.465493 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:31:38.466492 kubelet[2886]: E0114 00:31:38.465549 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:31:38.466492 kubelet[2886]: E0114 00:31:38.465767 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgztm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nsbsf_calico-system(67055487-2b15-4e2a-8975-7fee787b4309): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 00:31:38.468304 kubelet[2886]: E0114 00:31:38.468012 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nsbsf" podUID="67055487-2b15-4e2a-8975-7fee787b4309" Jan 14 00:31:38.770221 containerd[1612]: time="2026-01-14T00:31:38.770080763Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 00:31:39.121678 containerd[1612]: time="2026-01-14T00:31:39.121005379Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:31:39.124148 containerd[1612]: time="2026-01-14T00:31:39.124026728Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 00:31:39.124777 containerd[1612]: time="2026-01-14T00:31:39.124205367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 00:31:39.125487 kubelet[2886]: E0114 00:31:39.125428 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 00:31:39.127769 kubelet[2886]: E0114 00:31:39.126126 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 00:31:39.127907 kubelet[2886]: E0114 00:31:39.126381 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xsckj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5cbc4f559-f6xkt_calico-system(c604fd3c-83a2-496b-a7f9-8f8a03e00409): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 00:31:39.130246 kubelet[2886]: E0114 00:31:39.129665 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5cbc4f559-f6xkt" podUID="c604fd3c-83a2-496b-a7f9-8f8a03e00409" Jan 14 00:31:39.771249 containerd[1612]: time="2026-01-14T00:31:39.770784520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:31:40.141299 containerd[1612]: time="2026-01-14T00:31:40.140913456Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:31:40.144875 containerd[1612]: time="2026-01-14T00:31:40.144665563Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:31:40.144875 containerd[1612]: time="2026-01-14T00:31:40.144794883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:31:40.145149 kubelet[2886]: E0114 00:31:40.145081 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:31:40.146019 kubelet[2886]: E0114 00:31:40.145159 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:31:40.146019 kubelet[2886]: E0114 00:31:40.145401 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b56c4d86d-wv9l2_calico-apiserver(e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:31:40.146738 kubelet[2886]: E0114 00:31:40.146675 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-wv9l2" podUID="e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a" Jan 14 00:31:42.771641 containerd[1612]: time="2026-01-14T00:31:42.770225490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 00:31:43.127875 containerd[1612]: time="2026-01-14T00:31:43.127558677Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:31:43.129889 containerd[1612]: time="2026-01-14T00:31:43.129703510Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 00:31:43.130132 containerd[1612]: time="2026-01-14T00:31:43.130075069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 00:31:43.133260 kubelet[2886]: E0114 00:31:43.130404 2886 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:31:43.133260 kubelet[2886]: E0114 00:31:43.130460 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:31:43.133260 kubelet[2886]: E0114 00:31:43.130643 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zv2gz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8rkfb_calico-system(c4ec9a31-66c9-4bf7-a831-6c170af7211c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 00:31:43.137780 containerd[1612]: time="2026-01-14T00:31:43.136238528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 00:31:43.478589 containerd[1612]: time="2026-01-14T00:31:43.478011905Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:31:43.486466 containerd[1612]: time="2026-01-14T00:31:43.486388317Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 00:31:43.486866 containerd[1612]: time="2026-01-14T00:31:43.486768596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 00:31:43.487478 kubelet[2886]: E0114 00:31:43.487372 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:31:43.487478 kubelet[2886]: E0114 00:31:43.487431 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:31:43.489699 kubelet[2886]: E0114 00:31:43.487643 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zv2gz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8rkfb_calico-system(c4ec9a31-66c9-4bf7-a831-6c170af7211c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 00:31:43.489699 kubelet[2886]: E0114 00:31:43.489294 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rkfb" podUID="c4ec9a31-66c9-4bf7-a831-6c170af7211c" Jan 14 00:31:47.773805 kubelet[2886]: E0114 00:31:47.773733 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66d779df8f-4d6qc" podUID="ef34e9a3-c694-439f-b718-c0f1c8545835" Jan 14 00:31:49.773347 kubelet[2886]: E0114 00:31:49.773281 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-jdxwl" podUID="900e30be-5423-4b2c-9623-a51920c0a748" Jan 14 00:31:51.772133 kubelet[2886]: E0114 00:31:51.772074 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-wv9l2" podUID="e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a" Jan 14 00:31:52.768037 kubelet[2886]: E0114 00:31:52.767786 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nsbsf" podUID="67055487-2b15-4e2a-8975-7fee787b4309" Jan 14 00:31:54.768109 kubelet[2886]: E0114 00:31:54.768042 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5cbc4f559-f6xkt" podUID="c604fd3c-83a2-496b-a7f9-8f8a03e00409" Jan 14 00:31:58.771934 kubelet[2886]: E0114 00:31:58.771830 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rkfb" podUID="c4ec9a31-66c9-4bf7-a831-6c170af7211c" Jan 14 00:31:58.773046 containerd[1612]: time="2026-01-14T00:31:58.772990512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 00:31:59.101301 containerd[1612]: time="2026-01-14T00:31:59.101028030Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:31:59.102843 containerd[1612]: time="2026-01-14T00:31:59.102723586Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 00:31:59.102843 containerd[1612]: time="2026-01-14T00:31:59.102776985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 00:31:59.103473 kubelet[2886]: E0114 00:31:59.103361 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:31:59.103681 kubelet[2886]: E0114 00:31:59.103566 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:31:59.104086 kubelet[2886]: E0114 00:31:59.104012 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:512ab7b5f96042dca2636ff671657425,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zsdtd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d779df8f-4d6qc_calico-system(ef34e9a3-c694-439f-b718-c0f1c8545835): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 00:31:59.106842 containerd[1612]: time="2026-01-14T00:31:59.106725176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 00:31:59.458358 containerd[1612]: time="2026-01-14T00:31:59.458050408Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:31:59.464090 containerd[1612]: time="2026-01-14T00:31:59.463898234Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 00:31:59.464090 containerd[1612]: time="2026-01-14T00:31:59.464032753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 00:31:59.464676 kubelet[2886]: E0114 00:31:59.464507 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:31:59.464762 kubelet[2886]: E0114 00:31:59.464682 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:31:59.464887 kubelet[2886]: E0114 00:31:59.464833 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsdtd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d779df8f-4d6qc_calico-system(ef34e9a3-c694-439f-b718-c0f1c8545835): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 00:31:59.466208 kubelet[2886]: E0114 00:31:59.466147 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66d779df8f-4d6qc" podUID="ef34e9a3-c694-439f-b718-c0f1c8545835" Jan 14 00:32:03.777355 containerd[1612]: time="2026-01-14T00:32:03.777292530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 00:32:04.122993 containerd[1612]: time="2026-01-14T00:32:04.122625159Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:32:04.124926 containerd[1612]: time="2026-01-14T00:32:04.124734954Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 
00:32:04.125834 containerd[1612]: time="2026-01-14T00:32:04.125747792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 00:32:04.126096 kubelet[2886]: E0114 00:32:04.126052 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:32:04.129419 kubelet[2886]: E0114 00:32:04.126109 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:32:04.129419 kubelet[2886]: E0114 00:32:04.126418 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgztm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nsbsf_calico-system(67055487-2b15-4e2a-8975-7fee787b4309): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 00:32:04.129419 kubelet[2886]: E0114 00:32:04.128609 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nsbsf" podUID="67055487-2b15-4e2a-8975-7fee787b4309" Jan 14 00:32:04.129892 containerd[1612]: time="2026-01-14T00:32:04.126525670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:32:04.483960 containerd[1612]: time="2026-01-14T00:32:04.483349963Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:32:04.488847 containerd[1612]: time="2026-01-14T00:32:04.487150274Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:32:04.488847 containerd[1612]: time="2026-01-14T00:32:04.487193114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:32:04.489097 kubelet[2886]: E0114 00:32:04.489015 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:32:04.489097 kubelet[2886]: E0114 00:32:04.489072 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:32:04.489318 kubelet[2886]: E0114 00:32:04.489246 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tvhph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b56c4d86d-jdxwl_calico-apiserver(900e30be-5423-4b2c-9623-a51920c0a748): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:32:04.490905 kubelet[2886]: E0114 00:32:04.490847 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-jdxwl" podUID="900e30be-5423-4b2c-9623-a51920c0a748" Jan 14 00:32:05.784866 containerd[1612]: time="2026-01-14T00:32:05.783661841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:32:06.161929 containerd[1612]: time="2026-01-14T00:32:06.161475947Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:32:06.162999 containerd[1612]: time="2026-01-14T00:32:06.162930824Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:32:06.163181 containerd[1612]: time="2026-01-14T00:32:06.163065704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:32:06.163477 
kubelet[2886]: E0114 00:32:06.163413 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:32:06.163888 kubelet[2886]: E0114 00:32:06.163488 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:32:06.165115 kubelet[2886]: E0114 00:32:06.164928 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b56c4d86d-wv9l2_calico-apiserver(e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:32:06.167228 kubelet[2886]: E0114 00:32:06.166226 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-wv9l2" podUID="e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a" Jan 14 00:32:06.770220 containerd[1612]: time="2026-01-14T00:32:06.770165889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 00:32:07.105326 containerd[1612]: time="2026-01-14T00:32:07.105082618Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:32:07.111039 containerd[1612]: time="2026-01-14T00:32:07.110174207Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 00:32:07.111039 containerd[1612]: time="2026-01-14T00:32:07.110208887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 00:32:07.111333 kubelet[2886]: E0114 00:32:07.110504 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 00:32:07.111333 kubelet[2886]: E0114 00:32:07.110583 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 00:32:07.111510 kubelet[2886]: E0114 00:32:07.111431 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xsckj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5cbc4f559-f6xkt_calico-system(c604fd3c-83a2-496b-a7f9-8f8a03e00409): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 00:32:07.112666 kubelet[2886]: E0114 00:32:07.112616 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5cbc4f559-f6xkt" podUID="c604fd3c-83a2-496b-a7f9-8f8a03e00409" Jan 14 00:32:09.772673 containerd[1612]: time="2026-01-14T00:32:09.771799507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 00:32:10.112846 containerd[1612]: time="2026-01-14T00:32:10.112696339Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:32:10.115968 containerd[1612]: time="2026-01-14T00:32:10.115785212Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 00:32:10.115968 containerd[1612]: time="2026-01-14T00:32:10.115838212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 00:32:10.116595 kubelet[2886]: E0114 00:32:10.116476 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:32:10.116595 kubelet[2886]: E0114 00:32:10.116567 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:32:10.119016 kubelet[2886]: E0114 00:32:10.117596 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zv2gz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8rkfb_calico-system(c4ec9a31-66c9-4bf7-a831-6c170af7211c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 00:32:10.122636 containerd[1612]: time="2026-01-14T00:32:10.121961400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 00:32:10.461579 containerd[1612]: time="2026-01-14T00:32:10.461364041Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:32:10.464594 containerd[1612]: time="2026-01-14T00:32:10.464383795Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 00:32:10.464594 containerd[1612]: time="2026-01-14T00:32:10.464515195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 00:32:10.464907 kubelet[2886]: E0114 00:32:10.464783 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:32:10.464907 kubelet[2886]: E0114 00:32:10.464861 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:32:10.465163 kubelet[2886]: E0114 00:32:10.465028 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zv2gz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8rkfb_calico-system(c4ec9a31-66c9-4bf7-a831-6c170af7211c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 00:32:10.467512 kubelet[2886]: E0114 00:32:10.466559 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rkfb" podUID="c4ec9a31-66c9-4bf7-a831-6c170af7211c" Jan 14 00:32:10.770521 kubelet[2886]: E0114 00:32:10.770343 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66d779df8f-4d6qc" podUID="ef34e9a3-c694-439f-b718-c0f1c8545835" Jan 14 00:32:15.768872 kubelet[2886]: E0114 00:32:15.768701 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nsbsf" podUID="67055487-2b15-4e2a-8975-7fee787b4309" Jan 14 00:32:18.768892 kubelet[2886]: E0114 00:32:18.767427 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-wv9l2" podUID="e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a" Jan 14 00:32:19.771353 kubelet[2886]: E0114 00:32:19.771297 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5cbc4f559-f6xkt" podUID="c604fd3c-83a2-496b-a7f9-8f8a03e00409" Jan 14 00:32:19.774011 kubelet[2886]: E0114 00:32:19.772782 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-jdxwl" podUID="900e30be-5423-4b2c-9623-a51920c0a748" Jan 14 00:32:21.769215 kubelet[2886]: E0114 00:32:21.769103 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rkfb" podUID="c4ec9a31-66c9-4bf7-a831-6c170af7211c" Jan 14 00:32:25.771128 kubelet[2886]: E0114 00:32:25.771069 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66d779df8f-4d6qc" podUID="ef34e9a3-c694-439f-b718-c0f1c8545835" Jan 14 00:32:29.772244 kubelet[2886]: E0114 00:32:29.770112 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nsbsf" podUID="67055487-2b15-4e2a-8975-7fee787b4309" Jan 14 00:32:30.769615 kubelet[2886]: E0114 00:32:30.769543 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5cbc4f559-f6xkt" podUID="c604fd3c-83a2-496b-a7f9-8f8a03e00409" Jan 14 00:32:32.770274 kubelet[2886]: E0114 00:32:32.770023 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-jdxwl" podUID="900e30be-5423-4b2c-9623-a51920c0a748" Jan 14 00:32:33.769906 kubelet[2886]: E0114 00:32:33.769105 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-wv9l2" podUID="e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a" Jan 14 00:32:36.769956 kubelet[2886]: E0114 00:32:36.768522 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rkfb" podUID="c4ec9a31-66c9-4bf7-a831-6c170af7211c" Jan 14 00:32:39.781300 containerd[1612]: time="2026-01-14T00:32:39.780895268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 00:32:40.119048 containerd[1612]: time="2026-01-14T00:32:40.118366876Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:32:40.122955 containerd[1612]: time="2026-01-14T00:32:40.122771630Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 00:32:40.122955 containerd[1612]: time="2026-01-14T00:32:40.122785710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 00:32:40.123627 kubelet[2886]: E0114 00:32:40.123540 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:32:40.123627 kubelet[2886]: E0114 00:32:40.123612 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:32:40.124078 kubelet[2886]: E0114 00:32:40.123783 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:512ab7b5f96042dca2636ff671657425,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zsdtd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d779df8f-4d6qc_calico-system(ef34e9a3-c694-439f-b718-c0f1c8545835): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 00:32:40.127156 containerd[1612]: time="2026-01-14T00:32:40.127100464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 00:32:40.486439 containerd[1612]: time="2026-01-14T00:32:40.485536085Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:32:40.489286 containerd[1612]: time="2026-01-14T00:32:40.489145920Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 00:32:40.489825 containerd[1612]: time="2026-01-14T00:32:40.489321360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 00:32:40.491490 kubelet[2886]: E0114 00:32:40.490129 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:32:40.491490 kubelet[2886]: E0114 00:32:40.490179 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:32:40.491490 kubelet[2886]: E0114 00:32:40.490295 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsdtd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d779df8f-4d6qc_calico-system(ef34e9a3-c694-439f-b718-c0f1c8545835): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 00:32:40.493135 kubelet[2886]: E0114 00:32:40.493092 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66d779df8f-4d6qc" podUID="ef34e9a3-c694-439f-b718-c0f1c8545835" Jan 14 00:32:41.768878 kubelet[2886]: E0114 00:32:41.768270 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nsbsf" podUID="67055487-2b15-4e2a-8975-7fee787b4309" Jan 14 00:32:42.768924 kubelet[2886]: E0114 00:32:42.768347 2886 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5cbc4f559-f6xkt" podUID="c604fd3c-83a2-496b-a7f9-8f8a03e00409" Jan 14 00:32:45.768339 containerd[1612]: time="2026-01-14T00:32:45.768261380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:32:46.109398 containerd[1612]: time="2026-01-14T00:32:46.109208366Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:32:46.112044 containerd[1612]: time="2026-01-14T00:32:46.111961842Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:32:46.112214 containerd[1612]: time="2026-01-14T00:32:46.112102642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:32:46.112620 kubelet[2886]: E0114 00:32:46.112522 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:32:46.112620 kubelet[2886]: E0114 00:32:46.112608 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:32:46.113524 kubelet[2886]: E0114 00:32:46.112771 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tvhph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b56c4d86d-jdxwl_calico-apiserver(900e30be-5423-4b2c-9623-a51920c0a748): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:32:46.114181 kubelet[2886]: E0114 00:32:46.114113 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-jdxwl" podUID="900e30be-5423-4b2c-9623-a51920c0a748" Jan 14 00:32:46.768933 containerd[1612]: time="2026-01-14T00:32:46.768849893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:32:47.110552 containerd[1612]: time="2026-01-14T00:32:47.110489721Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:32:47.114338 containerd[1612]: time="2026-01-14T00:32:47.114192917Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:32:47.114338 containerd[1612]: time="2026-01-14T00:32:47.114259836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:32:47.115402 kubelet[2886]: E0114 00:32:47.115350 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:32:47.116196 kubelet[2886]: E0114 00:32:47.115902 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:32:47.116530 kubelet[2886]: E0114 00:32:47.116363 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b56c4d86d-wv9l2_calico-apiserver(e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:32:47.119103 kubelet[2886]: E0114 00:32:47.119038 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-wv9l2" podUID="e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a" Jan 14 00:32:50.767484 containerd[1612]: time="2026-01-14T00:32:50.767356046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 00:32:51.106045 containerd[1612]: time="2026-01-14T00:32:51.105796571Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:32:51.108475 containerd[1612]: time="2026-01-14T00:32:51.108256568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 00:32:51.108475 containerd[1612]: time="2026-01-14T00:32:51.108408488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 00:32:51.109092 kubelet[2886]: E0114 00:32:51.108958 2886 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:32:51.109849 kubelet[2886]: E0114 00:32:51.109040 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:32:51.110593 kubelet[2886]: E0114 00:32:51.110531 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zv2gz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8rkfb_calico-system(c4ec9a31-66c9-4bf7-a831-6c170af7211c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 00:32:51.115198 containerd[1612]: time="2026-01-14T00:32:51.114843560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 00:32:51.456631 containerd[1612]: time="2026-01-14T00:32:51.456462004Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:32:51.458471 containerd[1612]: time="2026-01-14T00:32:51.458345321Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 00:32:51.458471 containerd[1612]: time="2026-01-14T00:32:51.458397681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 00:32:51.459187 kubelet[2886]: E0114 00:32:51.458801 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:32:51.459187 kubelet[2886]: E0114 00:32:51.458900 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:32:51.459187 kubelet[2886]: E0114 00:32:51.459030 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zv2gz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8rkfb_calico-system(c4ec9a31-66c9-4bf7-a831-6c170af7211c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 00:32:51.460503 kubelet[2886]: E0114 00:32:51.460426 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rkfb" podUID="c4ec9a31-66c9-4bf7-a831-6c170af7211c" Jan 14 00:32:51.769436 kubelet[2886]: E0114 00:32:51.768964 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66d779df8f-4d6qc" podUID="ef34e9a3-c694-439f-b718-c0f1c8545835" Jan 14 00:32:54.768766 containerd[1612]: time="2026-01-14T00:32:54.768603736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 00:32:55.125548 containerd[1612]: time="2026-01-14T00:32:55.124761851Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:32:55.132349 containerd[1612]: time="2026-01-14T00:32:55.132216521Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 00:32:55.132349 containerd[1612]: time="2026-01-14T00:32:55.132296641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 00:32:55.132789 kubelet[2886]: E0114 00:32:55.132735 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 00:32:55.134191 kubelet[2886]: E0114 00:32:55.132801 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 00:32:55.134559 containerd[1612]: time="2026-01-14T00:32:55.134402679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 00:32:55.134658 kubelet[2886]: E0114 00:32:55.134467 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xsckj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5cbc4f559-f6xkt_calico-system(c604fd3c-83a2-496b-a7f9-8f8a03e00409): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 00:32:55.137757 kubelet[2886]: E0114 00:32:55.137687 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5cbc4f559-f6xkt" podUID="c604fd3c-83a2-496b-a7f9-8f8a03e00409" Jan 14 00:32:55.704308 containerd[1612]: time="2026-01-14T00:32:55.704235489Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:32:55.706868 containerd[1612]: 
time="2026-01-14T00:32:55.706452246Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 00:32:55.706868 containerd[1612]: time="2026-01-14T00:32:55.706491126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 00:32:55.707303 kubelet[2886]: E0114 00:32:55.707229 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:32:55.707893 kubelet[2886]: E0114 00:32:55.707289 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:32:55.708298 kubelet[2886]: E0114 00:32:55.708098 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgztm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nsbsf_calico-system(67055487-2b15-4e2a-8975-7fee787b4309): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 00:32:55.709843 kubelet[2886]: E0114 00:32:55.709236 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nsbsf" podUID="67055487-2b15-4e2a-8975-7fee787b4309" Jan 14 00:32:58.244740 update_engine[1587]: I20260114 00:32:58.243942 1587 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 14 00:32:58.244740 update_engine[1587]: I20260114 00:32:58.244009 1587 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 14 00:32:58.244740 update_engine[1587]: I20260114 00:32:58.244318 1587 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 14 00:32:58.246604 update_engine[1587]: I20260114 00:32:58.246560 1587 omaha_request_params.cc:62] Current group set to alpha Jan 14 00:32:58.248334 update_engine[1587]: I20260114 00:32:58.247557 1587 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 14 00:32:58.248334 update_engine[1587]: I20260114 00:32:58.247596 1587 update_attempter.cc:643] Scheduling an action processor start. 
Jan 14 00:32:58.248334 update_engine[1587]: I20260114 00:32:58.247614 1587 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 14 00:32:58.255018 update_engine[1587]: I20260114 00:32:58.254563 1587 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 14 00:32:58.255018 update_engine[1587]: I20260114 00:32:58.254706 1587 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 14 00:32:58.255018 update_engine[1587]: I20260114 00:32:58.254714 1587 omaha_request_action.cc:272] Request: Jan 14 00:32:58.255018 update_engine[1587]: Jan 14 00:32:58.255018 update_engine[1587]: Jan 14 00:32:58.255018 update_engine[1587]: Jan 14 00:32:58.255018 update_engine[1587]: Jan 14 00:32:58.255018 update_engine[1587]: Jan 14 00:32:58.255018 update_engine[1587]: Jan 14 00:32:58.255018 update_engine[1587]: Jan 14 00:32:58.255018 update_engine[1587]: Jan 14 00:32:58.255018 update_engine[1587]: I20260114 00:32:58.254722 1587 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 00:32:58.258293 locksmithd[1633]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 14 00:32:58.260858 update_engine[1587]: I20260114 00:32:58.260165 1587 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 00:32:58.261324 update_engine[1587]: I20260114 00:32:58.261276 1587 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 14 00:32:58.262843 update_engine[1587]: E20260114 00:32:58.262235 1587 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 00:32:58.263097 update_engine[1587]: I20260114 00:32:58.263065 1587 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 14 00:32:58.768653 kubelet[2886]: E0114 00:32:58.768579 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-wv9l2" podUID="e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a" Jan 14 00:32:58.769682 kubelet[2886]: E0114 00:32:58.768734 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-jdxwl" podUID="900e30be-5423-4b2c-9623-a51920c0a748" Jan 14 00:33:00.446710 systemd[1]: Started sshd@34-91.99.0.249:22-4.153.228.146:44892.service - OpenSSH per-connection server daemon (4.153.228.146:44892). Jan 14 00:33:00.450098 kernel: kauditd_printk_skb: 194 callbacks suppressed Jan 14 00:33:00.450260 kernel: audit: type=1130 audit(1768350780.445:811): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-91.99.0.249:22-4.153.228.146:44892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:33:00.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-91.99.0.249:22-4.153.228.146:44892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:33:01.032000 audit[5312]: USER_ACCT pid=5312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:01.034913 sshd[5312]: Accepted publickey for core from 4.153.228.146 port 44892 ssh2: RSA SHA256:G2BOS0iIRk9EQIJiUwTXMI6Ge/QGgk5HV0uKx8xVGik Jan 14 00:33:01.038893 kernel: audit: type=1101 audit(1768350781.032:812): pid=5312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:01.037000 audit[5312]: CRED_ACQ pid=5312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:01.040252 sshd-session[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:33:01.043207 kernel: audit: type=1103 audit(1768350781.037:813): pid=5312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:01.044985 kernel: audit: type=1006 audit(1768350781.037:814): pid=5312 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 14 00:33:01.045032 kernel: audit: type=1300 audit(1768350781.037:814): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffebbb49e0 a2=3 a3=0 items=0 ppid=1 pid=5312 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:33:01.037000 audit[5312]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffebbb49e0 a2=3 a3=0 items=0 ppid=1 pid=5312 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:33:01.037000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:33:01.047009 kernel: audit: type=1327 audit(1768350781.037:814): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:33:01.055939 systemd-logind[1580]: New session 9 of user core. Jan 14 00:33:01.061118 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 14 00:33:01.070000 audit[5312]: USER_START pid=5312 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:01.075000 audit[5316]: CRED_ACQ pid=5316 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:01.081542 kernel: audit: type=1105 audit(1768350781.070:815): pid=5312 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:01.081704 kernel: audit: type=1103 audit(1768350781.075:816): pid=5316 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:01.498934 sshd[5316]: Connection closed by 4.153.228.146 port 44892 Jan 14 00:33:01.499185 sshd-session[5312]: pam_unix(sshd:session): session closed for user core Jan 14 00:33:01.503000 audit[5312]: USER_END pid=5312 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:01.503000 audit[5312]: CRED_DISP pid=5312 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:01.513023 kernel: audit: type=1106 audit(1768350781.503:817): pid=5312 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:01.513113 kernel: audit: type=1104 audit(1768350781.503:818): pid=5312 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:01.511633 systemd[1]: sshd@34-91.99.0.249:22-4.153.228.146:44892.service: Deactivated successfully. Jan 14 00:33:01.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-91.99.0.249:22-4.153.228.146:44892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:33:01.519539 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 00:33:01.522075 systemd-logind[1580]: Session 9 logged out. Waiting for processes to exit. Jan 14 00:33:01.525497 systemd-logind[1580]: Removed session 9. 
Jan 14 00:33:02.772340 kubelet[2886]: E0114 00:33:02.771680 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rkfb" podUID="c4ec9a31-66c9-4bf7-a831-6c170af7211c" Jan 14 00:33:04.773105 kubelet[2886]: E0114 00:33:04.773029 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66d779df8f-4d6qc" podUID="ef34e9a3-c694-439f-b718-c0f1c8545835" Jan 14 00:33:06.602705 systemd[1]: Started sshd@35-91.99.0.249:22-4.153.228.146:42156.service - OpenSSH per-connection server daemon (4.153.228.146:42156). Jan 14 00:33:06.606373 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 00:33:06.606413 kernel: audit: type=1130 audit(1768350786.601:820): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-91.99.0.249:22-4.153.228.146:42156 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:33:06.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-91.99.0.249:22-4.153.228.146:42156 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:33:07.178000 audit[5346]: USER_ACCT pid=5346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:07.182651 sshd[5346]: Accepted publickey for core from 4.153.228.146 port 42156 ssh2: RSA SHA256:G2BOS0iIRk9EQIJiUwTXMI6Ge/QGgk5HV0uKx8xVGik Jan 14 00:33:07.185850 kernel: audit: type=1101 audit(1768350787.178:821): pid=5346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:07.184000 audit[5346]: CRED_ACQ pid=5346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:07.189187 sshd-session[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:33:07.190990 kernel: audit: type=1103 audit(1768350787.184:822): pid=5346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:07.191118 kernel: audit: type=1006 audit(1768350787.186:823): pid=5346 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 14 00:33:07.186000 audit[5346]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeed0d6b0 a2=3 a3=0 items=0 ppid=1 pid=5346 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:33:07.194186 kernel: audit: type=1300 audit(1768350787.186:823): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeed0d6b0 a2=3 a3=0 items=0 ppid=1 pid=5346 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:33:07.186000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:33:07.196289 kernel: audit: type=1327 audit(1768350787.186:823): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:33:07.204250 systemd-logind[1580]: New session 10 of user core. Jan 14 00:33:07.211102 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 14 00:33:07.213000 audit[5346]: USER_START pid=5346 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:07.217000 audit[5357]: CRED_ACQ pid=5357 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:07.221615 kernel: audit: type=1105 audit(1768350787.213:824): pid=5346 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:07.221723 kernel: audit: type=1103 audit(1768350787.217:825): pid=5357 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:07.569295 sshd[5357]: Connection closed by 4.153.228.146 port 42156 Jan 14 00:33:07.570125 sshd-session[5346]: pam_unix(sshd:session): session closed for user core Jan 14 00:33:07.571000 audit[5346]: USER_END pid=5346 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:07.572000 audit[5346]: CRED_DISP pid=5346 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:07.578030 kernel: audit: type=1106 audit(1768350787.571:826): pid=5346 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:07.579070 systemd[1]: sshd@35-91.99.0.249:22-4.153.228.146:42156.service: Deactivated successfully. Jan 14 00:33:07.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-91.99.0.249:22-4.153.228.146:42156 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:33:07.581889 kernel: audit: type=1104 audit(1768350787.572:827): pid=5346 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:07.586333 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 00:33:07.589212 systemd-logind[1580]: Session 10 logged out. Waiting for processes to exit. Jan 14 00:33:07.591454 systemd-logind[1580]: Removed session 10. 
Jan 14 00:33:07.767990 kubelet[2886]: E0114 00:33:07.767851 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5cbc4f559-f6xkt" podUID="c604fd3c-83a2-496b-a7f9-8f8a03e00409" Jan 14 00:33:08.154390 update_engine[1587]: I20260114 00:33:08.152907 1587 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 00:33:08.154390 update_engine[1587]: I20260114 00:33:08.153036 1587 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 00:33:08.154390 update_engine[1587]: I20260114 00:33:08.153599 1587 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 14 00:33:08.157077 update_engine[1587]: E20260114 00:33:08.157017 1587 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 00:33:08.157317 update_engine[1587]: I20260114 00:33:08.157295 1587 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 14 00:33:10.769839 kubelet[2886]: E0114 00:33:10.767462 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-jdxwl" podUID="900e30be-5423-4b2c-9623-a51920c0a748" Jan 14 00:33:10.769839 kubelet[2886]: E0114 00:33:10.767563 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nsbsf" podUID="67055487-2b15-4e2a-8975-7fee787b4309" Jan 14 00:33:12.681979 systemd[1]: Started sshd@36-91.99.0.249:22-4.153.228.146:42160.service - OpenSSH per-connection server daemon (4.153.228.146:42160). Jan 14 00:33:12.685118 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 00:33:12.685268 kernel: audit: type=1130 audit(1768350792.680:829): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-91.99.0.249:22-4.153.228.146:42160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:33:12.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-91.99.0.249:22-4.153.228.146:42160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:33:13.240857 sshd[5371]: Accepted publickey for core from 4.153.228.146 port 42160 ssh2: RSA SHA256:G2BOS0iIRk9EQIJiUwTXMI6Ge/QGgk5HV0uKx8xVGik Jan 14 00:33:13.237000 audit[5371]: USER_ACCT pid=5371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:13.247301 kernel: audit: type=1101 audit(1768350793.237:830): pid=5371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:13.247427 kernel: audit: type=1103 audit(1768350793.242:831): pid=5371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:13.242000 audit[5371]: CRED_ACQ pid=5371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:13.247093 sshd-session[5371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:33:13.251917 kernel: audit: type=1006 audit(1768350793.242:832): pid=5371 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 14 00:33:13.254572 kernel: audit: type=1300 audit(1768350793.242:832): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff3ef8fd0 a2=3 a3=0 items=0 ppid=1 pid=5371 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:33:13.242000 audit[5371]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff3ef8fd0 a2=3 a3=0 items=0 ppid=1 pid=5371 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:33:13.242000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:33:13.256834 kernel: audit: type=1327 audit(1768350793.242:832): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:33:13.262706 systemd-logind[1580]: New session 11 of user core. Jan 14 00:33:13.269934 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 14 00:33:13.272000 audit[5371]: USER_START pid=5371 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:13.282347 kernel: audit: type=1105 audit(1768350793.272:833): pid=5371 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:13.282489 kernel: audit: type=1103 audit(1768350793.278:834): pid=5375 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:13.278000 audit[5375]: CRED_ACQ pid=5375 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:13.632880 sshd[5375]: Connection closed by 4.153.228.146 port 42160 Jan 14 00:33:13.633297 sshd-session[5371]: pam_unix(sshd:session): session closed for user core Jan 14 00:33:13.637000 audit[5371]: USER_END pid=5371 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:13.641000 audit[5371]: CRED_DISP pid=5371 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:13.647293 kernel: audit: type=1106 audit(1768350793.637:835): pid=5371 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:13.647908 kernel: audit: type=1104 audit(1768350793.641:836): pid=5371 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:13.650424 systemd[1]: sshd@36-91.99.0.249:22-4.153.228.146:42160.service: Deactivated successfully. Jan 14 00:33:13.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-91.99.0.249:22-4.153.228.146:42160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:33:13.655068 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 00:33:13.659518 systemd-logind[1580]: Session 11 logged out. Waiting for processes to exit. Jan 14 00:33:13.662958 systemd-logind[1580]: Removed session 11. 
Jan 14 00:33:13.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-91.99.0.249:22-4.153.228.146:42170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:33:13.743209 systemd[1]: Started sshd@37-91.99.0.249:22-4.153.228.146:42170.service - OpenSSH per-connection server daemon (4.153.228.146:42170). Jan 14 00:33:13.778849 kubelet[2886]: E0114 00:33:13.776356 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-wv9l2" podUID="e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a" Jan 14 00:33:14.318000 audit[5387]: USER_ACCT pid=5387 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:14.319582 sshd[5387]: Accepted publickey for core from 4.153.228.146 port 42170 ssh2: RSA SHA256:G2BOS0iIRk9EQIJiUwTXMI6Ge/QGgk5HV0uKx8xVGik Jan 14 00:33:14.321000 audit[5387]: CRED_ACQ pid=5387 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:14.321000 audit[5387]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff1b36f60 a2=3 a3=0 items=0 ppid=1 pid=5387 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:33:14.321000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:33:14.322792 sshd-session[5387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:33:14.331419 systemd-logind[1580]: New session 12 of user core. Jan 14 00:33:14.340124 systemd[1]: Started session-12.scope - Session 12 of User core. 
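The kubelet pod_workers entries keep repeating for the same handful of Calico images, so a quick tally is easier to read than scanning each line. A rough Python sketch, assuming the log is fed in as plain text; the regex is written against the escaped "Back-off pulling image \"…\"" wording visible in these entries, and the function name is illustrative.

```python
import re
from collections import Counter

# Count how often each image reference appears in an ImagePullBackOff message,
# based on the escaped "Back-off pulling image \"ghcr.io/...\"" wording used by
# the kubelet entries in this log. Purely a log-reading aid, not a kubelet API.
IMAGE = re.compile(r'Back-off pulling image \W*(ghcr\.io/[\w./-]+:[\w.-]+)')

def backoff_counts(log_text: str) -> Counter:
    return Counter(IMAGE.findall(log_text))

# Usage: backoff_counts(open("node.log").read())
# yields e.g. Counter({"ghcr.io/flatcar/calico/apiserver:v3.30.4": ..., ...})
```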
Jan 14 00:33:14.346000 audit[5387]: USER_START pid=5387 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:14.350000 audit[5391]: CRED_ACQ pid=5391 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:14.836081 sshd[5391]: Connection closed by 4.153.228.146 port 42170 Jan 14 00:33:14.835466 sshd-session[5387]: pam_unix(sshd:session): session closed for user core Jan 14 00:33:14.836000 audit[5387]: USER_END pid=5387 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:14.837000 audit[5387]: CRED_DISP pid=5387 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:14.845360 systemd[1]: sshd@37-91.99.0.249:22-4.153.228.146:42170.service: Deactivated successfully. Jan 14 00:33:14.845893 systemd-logind[1580]: Session 12 logged out. Waiting for processes to exit. Jan 14 00:33:14.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-91.99.0.249:22-4.153.228.146:42170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:33:14.849103 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 00:33:14.852611 systemd-logind[1580]: Removed session 12. Jan 14 00:33:14.945606 systemd[1]: Started sshd@38-91.99.0.249:22-4.153.228.146:60990.service - OpenSSH per-connection server daemon (4.153.228.146:60990). Jan 14 00:33:14.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-91.99.0.249:22-4.153.228.146:60990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:33:15.508000 audit[5404]: USER_ACCT pid=5404 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:15.511795 sshd[5404]: Accepted publickey for core from 4.153.228.146 port 60990 ssh2: RSA SHA256:G2BOS0iIRk9EQIJiUwTXMI6Ge/QGgk5HV0uKx8xVGik Jan 14 00:33:15.512000 audit[5404]: CRED_ACQ pid=5404 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:15.512000 audit[5404]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff7cc70c0 a2=3 a3=0 items=0 ppid=1 pid=5404 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:33:15.512000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:33:15.517007 sshd-session[5404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:33:15.524094 systemd-logind[1580]: New session 13 of user core. Jan 14 00:33:15.532107 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 14 00:33:15.538000 audit[5404]: USER_START pid=5404 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:15.541000 audit[5409]: CRED_ACQ pid=5409 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:15.909024 sshd[5409]: Connection closed by 4.153.228.146 port 60990 Jan 14 00:33:15.908355 sshd-session[5404]: pam_unix(sshd:session): session closed for user core Jan 14 00:33:15.908000 audit[5404]: USER_END pid=5404 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:15.909000 audit[5404]: CRED_DISP pid=5404 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:15.916172 systemd[1]: sshd@38-91.99.0.249:22-4.153.228.146:60990.service: Deactivated successfully. Jan 14 00:33:15.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-91.99.0.249:22-4.153.228.146:60990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:33:15.921426 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 00:33:15.924005 systemd-logind[1580]: Session 13 logged out. Waiting for processes to exit. 
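Every connection above follows the same sequence: systemd starts an sshd@… unit, sshd-session opens a PAM session, systemd-logind reports "New session N of user core", and around half a second later the session is removed again. A sketch that pairs the logind open/close messages to measure those lifetimes; the timestamp format is the one used throughout this log, and the year is an assumption because syslog-style timestamps do not carry one.

```python
import re
from datetime import datetime

# Pair "New session N" / "Removed session N" systemd-logind messages to get
# per-session lifetimes (sessions 11 and 12 above last roughly half a second).
TS = r'(\w{3} +\d+ \d\d:\d\d:\d\d\.\d+)'
NEW = re.compile(TS + r' systemd-logind\[\d+\]: New session (\d+) of user')
GONE = re.compile(TS + r' systemd-logind\[\d+\]: Removed session (\d+)\.')

def _parse(ts: str, year: int = 2026) -> datetime:
    # "Jan 14 00:33:13.262706" -> datetime; the year is assumed, it is not logged
    return datetime.strptime(f"{year} {ts}", "%Y %b %d %H:%M:%S.%f")

def session_durations(log_text: str) -> dict:
    opened = {ses: _parse(ts) for ts, ses in NEW.findall(log_text)}
    return {ses: _parse(ts) - opened[ses]
            for ts, ses in GONE.findall(log_text) if ses in opened}
```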
Jan 14 00:33:15.929389 systemd-logind[1580]: Removed session 13. Jan 14 00:33:16.770598 kubelet[2886]: E0114 00:33:16.770492 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rkfb" podUID="c4ec9a31-66c9-4bf7-a831-6c170af7211c" Jan 14 00:33:18.156835 update_engine[1587]: I20260114 00:33:18.154865 1587 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 00:33:18.156835 update_engine[1587]: I20260114 00:33:18.155002 1587 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 00:33:18.156835 update_engine[1587]: I20260114 00:33:18.155426 1587 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 14 00:33:18.160036 update_engine[1587]: E20260114 00:33:18.159972 1587 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 00:33:18.160285 update_engine[1587]: I20260114 00:33:18.160264 1587 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 14 00:33:18.768850 kubelet[2886]: E0114 00:33:18.768226 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5cbc4f559-f6xkt" podUID="c604fd3c-83a2-496b-a7f9-8f8a03e00409" Jan 14 00:33:18.770897 kubelet[2886]: E0114 00:33:18.770792 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66d779df8f-4d6qc" podUID="ef34e9a3-c694-439f-b718-c0f1c8545835" Jan 14 00:33:21.028380 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 14 00:33:21.028741 kernel: audit: type=1130 audit(1768350801.023:856): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@39-91.99.0.249:22-4.153.228.146:60998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:33:21.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-91.99.0.249:22-4.153.228.146:60998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:33:21.024235 systemd[1]: Started sshd@39-91.99.0.249:22-4.153.228.146:60998.service - OpenSSH per-connection server daemon (4.153.228.146:60998). Jan 14 00:33:21.584000 audit[5445]: USER_ACCT pid=5445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:21.588985 sshd[5445]: Accepted publickey for core from 4.153.228.146 port 60998 ssh2: RSA SHA256:G2BOS0iIRk9EQIJiUwTXMI6Ge/QGgk5HV0uKx8xVGik Jan 14 00:33:21.593475 kernel: audit: type=1101 audit(1768350801.584:857): pid=5445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:21.593589 kernel: audit: type=1103 audit(1768350801.589:858): pid=5445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:21.589000 audit[5445]: CRED_ACQ pid=5445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:21.593243 sshd-session[5445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:33:21.596344 kernel: audit: type=1006 audit(1768350801.589:859): pid=5445 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 14 00:33:21.599209 kernel: audit: type=1300 audit(1768350801.589:859): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff8a27e70 a2=3 a3=0 items=0 ppid=1 pid=5445 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:33:21.589000 audit[5445]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff8a27e70 a2=3 a3=0 items=0 ppid=1 pid=5445 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:33:21.589000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:33:21.600831 kernel: audit: type=1327 audit(1768350801.589:859): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:33:21.605591 systemd-logind[1580]: New session 14 of user core. Jan 14 00:33:21.612274 systemd[1]: Started session-14.scope - Session 14 of User core. 
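The audit SYSCALL records in this log all carry arch=c00000b7, which is AUDIT_ARCH_AARCH64: the low 16 bits are the ELF machine number (EM_AARCH64 = 183 = 0xb7) and the top bits flag a 64-bit, little-endian ABI, so the syscall numbers are from the arm64 generic table (64 is write; 211, seen later in the iptables-restore records, is sendmsg). A small decoder using only those well-known flag values:

```python
# Decode the arch= field of the audit SYSCALL records above (arch=c00000b7).
# Flag values are the standard ones from linux/audit.h.
AUDIT_ARCH_64BIT = 0x80000000
AUDIT_ARCH_LE = 0x40000000

def describe_arch(arch_hex: str) -> str:
    value = int(arch_hex, 16)
    machine = value & 0xFFFF                     # ELF machine number; 183 == EM_AARCH64
    bits = 64 if value & AUDIT_ARCH_64BIT else 32
    endian = "little" if value & AUDIT_ARCH_LE else "big"
    return f"EM={machine}, {bits}-bit, {endian}-endian"

print(describe_arch("c00000b7"))   # EM=183, 64-bit, little-endian -> AArch64
```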
Jan 14 00:33:21.618000 audit[5445]: USER_START pid=5445 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:21.623904 kernel: audit: type=1105 audit(1768350801.618:860): pid=5445 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:21.624000 audit[5449]: CRED_ACQ pid=5449 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:21.629867 kernel: audit: type=1103 audit(1768350801.624:861): pid=5449 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:21.770155 kubelet[2886]: E0114 00:33:21.769537 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nsbsf" podUID="67055487-2b15-4e2a-8975-7fee787b4309" Jan 14 00:33:22.079694 sshd[5449]: Connection closed by 4.153.228.146 port 60998 Jan 14 00:33:22.080047 sshd-session[5445]: pam_unix(sshd:session): session closed for user core Jan 14 00:33:22.084000 audit[5445]: USER_END pid=5445 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:22.084000 audit[5445]: CRED_DISP pid=5445 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:22.092698 kernel: audit: type=1106 audit(1768350802.084:862): pid=5445 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:22.093000 kernel: audit: type=1104 audit(1768350802.084:863): pid=5445 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:22.094026 systemd[1]: 
sshd@39-91.99.0.249:22-4.153.228.146:60998.service: Deactivated successfully. Jan 14 00:33:22.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-91.99.0.249:22-4.153.228.146:60998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:33:22.101788 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 00:33:22.104508 systemd-logind[1580]: Session 14 logged out. Waiting for processes to exit. Jan 14 00:33:22.108296 systemd-logind[1580]: Removed session 14. Jan 14 00:33:22.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-91.99.0.249:22-4.153.228.146:32776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:33:22.191830 systemd[1]: Started sshd@40-91.99.0.249:22-4.153.228.146:32776.service - OpenSSH per-connection server daemon (4.153.228.146:32776). Jan 14 00:33:22.756000 audit[5462]: USER_ACCT pid=5462 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:22.758236 sshd[5462]: Accepted publickey for core from 4.153.228.146 port 32776 ssh2: RSA SHA256:G2BOS0iIRk9EQIJiUwTXMI6Ge/QGgk5HV0uKx8xVGik Jan 14 00:33:22.760000 audit[5462]: CRED_ACQ pid=5462 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:22.760000 audit[5462]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc8408e30 a2=3 a3=0 items=0 ppid=1 pid=5462 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:33:22.760000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:33:22.762803 sshd-session[5462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:33:22.775464 systemd-logind[1580]: New session 15 of user core. Jan 14 00:33:22.777155 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 14 00:33:22.781000 audit[5462]: USER_START pid=5462 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:22.785000 audit[5468]: CRED_ACQ pid=5468 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:23.356700 sshd[5468]: Connection closed by 4.153.228.146 port 32776 Jan 14 00:33:23.360367 sshd-session[5462]: pam_unix(sshd:session): session closed for user core Jan 14 00:33:23.366000 audit[5462]: USER_END pid=5462 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:23.366000 audit[5462]: CRED_DISP pid=5462 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:23.371681 systemd-logind[1580]: Session 15 logged out. Waiting for processes to exit. Jan 14 00:33:23.371766 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 00:33:23.373594 systemd[1]: sshd@40-91.99.0.249:22-4.153.228.146:32776.service: Deactivated successfully. Jan 14 00:33:23.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-91.99.0.249:22-4.153.228.146:32776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:33:23.384374 systemd-logind[1580]: Removed session 15. Jan 14 00:33:23.464406 systemd[1]: Started sshd@41-91.99.0.249:22-4.153.228.146:32788.service - OpenSSH per-connection server daemon (4.153.228.146:32788). Jan 14 00:33:23.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-91.99.0.249:22-4.153.228.146:32788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:33:23.775122 kubelet[2886]: E0114 00:33:23.774132 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-jdxwl" podUID="900e30be-5423-4b2c-9623-a51920c0a748" Jan 14 00:33:24.046000 audit[5478]: USER_ACCT pid=5478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:24.047907 sshd[5478]: Accepted publickey for core from 4.153.228.146 port 32788 ssh2: RSA SHA256:G2BOS0iIRk9EQIJiUwTXMI6Ge/QGgk5HV0uKx8xVGik Jan 14 00:33:24.048000 audit[5478]: CRED_ACQ pid=5478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:24.048000 audit[5478]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe2b887d0 a2=3 a3=0 items=0 ppid=1 pid=5478 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:33:24.048000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:33:24.050238 sshd-session[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:33:24.061362 systemd-logind[1580]: New session 16 of user core. Jan 14 00:33:24.068526 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 14 00:33:24.074000 audit[5478]: USER_START pid=5478 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:24.078000 audit[5482]: CRED_ACQ pid=5482 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:25.333689 sshd[5482]: Connection closed by 4.153.228.146 port 32788 Jan 14 00:33:25.333594 sshd-session[5478]: pam_unix(sshd:session): session closed for user core Jan 14 00:33:25.337000 audit[5494]: NETFILTER_CFG table=filter:139 family=2 entries=26 op=nft_register_rule pid=5494 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:33:25.337000 audit[5494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffed534300 a2=0 a3=1 items=0 ppid=3053 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:33:25.337000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:33:25.341000 audit[5478]: USER_END pid=5478 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:25.341000 audit[5478]: CRED_DISP pid=5478 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:25.347564 systemd[1]: sshd@41-91.99.0.249:22-4.153.228.146:32788.service: Deactivated successfully. Jan 14 00:33:25.348000 audit[5494]: NETFILTER_CFG table=nat:140 family=2 entries=20 op=nft_register_rule pid=5494 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:33:25.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-91.99.0.249:22-4.153.228.146:32788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:33:25.348000 audit[5494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffed534300 a2=0 a3=1 items=0 ppid=3053 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:33:25.348000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:33:25.357322 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 00:33:25.359771 systemd-logind[1580]: Session 16 logged out. Waiting for processes to exit. Jan 14 00:33:25.363407 systemd-logind[1580]: Removed session 16. 
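The NETFILTER_CFG records above come from iptables-restore runs under the nft backend (decoding their PROCTITLE with the helper further up gives `iptables-restore -w 5 -W 100000 --noflush --counters`), and each record reports which table was touched and how many entries were registered. A sketch that aggregates them; the field names are exactly as they appear in the records, the function name is ours.

```python
import re
from collections import Counter

# Aggregate NETFILTER_CFG audit records by (table, nft op), summing the
# entries= counts, to see how much rule churn the periodic iptables-restore
# runs generate.
NFT = re.compile(r'NETFILTER_CFG table=(\w+):\d+ family=\d+ entries=(\d+) op=(\w+)')

def netfilter_activity(log_text: str) -> Counter:
    totals = Counter()
    for table, entries, op in NFT.findall(log_text):
        totals[(table, op)] += int(entries)
    return totals

# e.g. {("filter", "nft_register_rule"): ..., ("nat", "nft_register_chain"): ...}
```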
Jan 14 00:33:25.383000 audit[5499]: NETFILTER_CFG table=filter:141 family=2 entries=38 op=nft_register_rule pid=5499 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:33:25.383000 audit[5499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffff58f0950 a2=0 a3=1 items=0 ppid=3053 pid=5499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:33:25.383000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:33:25.388000 audit[5499]: NETFILTER_CFG table=nat:142 family=2 entries=20 op=nft_register_rule pid=5499 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:33:25.388000 audit[5499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff58f0950 a2=0 a3=1 items=0 ppid=3053 pid=5499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:33:25.388000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:33:25.457663 systemd[1]: Started sshd@42-91.99.0.249:22-4.153.228.146:43952.service - OpenSSH per-connection server daemon (4.153.228.146:43952). Jan 14 00:33:25.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-91.99.0.249:22-4.153.228.146:43952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:33:26.031914 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 14 00:33:26.032137 kernel: audit: type=1101 audit(1768350806.028:888): pid=5501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:26.028000 audit[5501]: USER_ACCT pid=5501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:26.032285 sshd[5501]: Accepted publickey for core from 4.153.228.146 port 43952 ssh2: RSA SHA256:G2BOS0iIRk9EQIJiUwTXMI6Ge/QGgk5HV0uKx8xVGik Jan 14 00:33:26.034000 audit[5501]: CRED_ACQ pid=5501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:26.036308 sshd-session[5501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:33:26.041373 kernel: audit: type=1103 audit(1768350806.034:889): pid=5501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:26.041457 kernel: audit: type=1006 audit(1768350806.034:890): pid=5501 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 
auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 14 00:33:26.034000 audit[5501]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffddf58470 a2=3 a3=0 items=0 ppid=1 pid=5501 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:33:26.044464 kernel: audit: type=1300 audit(1768350806.034:890): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffddf58470 a2=3 a3=0 items=0 ppid=1 pid=5501 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:33:26.034000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:33:26.046165 kernel: audit: type=1327 audit(1768350806.034:890): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:33:26.046122 systemd-logind[1580]: New session 17 of user core. Jan 14 00:33:26.052380 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 14 00:33:26.055000 audit[5501]: USER_START pid=5501 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:26.062412 kernel: audit: type=1105 audit(1768350806.055:891): pid=5501 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:26.062567 kernel: audit: type=1103 audit(1768350806.060:892): pid=5505 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:26.060000 audit[5505]: CRED_ACQ pid=5505 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:26.662193 sshd[5505]: Connection closed by 4.153.228.146 port 43952 Jan 14 00:33:26.662958 sshd-session[5501]: pam_unix(sshd:session): session closed for user core Jan 14 00:33:26.665000 audit[5501]: USER_END pid=5501 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:26.666000 audit[5501]: CRED_DISP pid=5501 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:26.672324 kernel: audit: type=1106 audit(1768350806.665:893): pid=5501 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:26.672435 kernel: audit: type=1104 audit(1768350806.666:894): pid=5501 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:26.673210 systemd[1]: sshd@42-91.99.0.249:22-4.153.228.146:43952.service: Deactivated successfully. Jan 14 00:33:26.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-91.99.0.249:22-4.153.228.146:43952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:33:26.675467 kernel: audit: type=1131 audit(1768350806.672:895): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-91.99.0.249:22-4.153.228.146:43952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:33:26.680146 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 00:33:26.685426 systemd-logind[1580]: Session 17 logged out. Waiting for processes to exit. Jan 14 00:33:26.687586 systemd-logind[1580]: Removed session 17. Jan 14 00:33:26.767459 systemd[1]: Started sshd@43-91.99.0.249:22-4.153.228.146:43956.service - OpenSSH per-connection server daemon (4.153.228.146:43956). Jan 14 00:33:26.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-91.99.0.249:22-4.153.228.146:43956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:33:27.316000 audit[5514]: USER_ACCT pid=5514 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:27.319621 sshd[5514]: Accepted publickey for core from 4.153.228.146 port 43956 ssh2: RSA SHA256:G2BOS0iIRk9EQIJiUwTXMI6Ge/QGgk5HV0uKx8xVGik Jan 14 00:33:27.321461 sshd-session[5514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:33:27.319000 audit[5514]: CRED_ACQ pid=5514 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:27.319000 audit[5514]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffca261e20 a2=3 a3=0 items=0 ppid=1 pid=5514 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:33:27.319000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:33:27.331356 systemd-logind[1580]: New session 18 of user core. Jan 14 00:33:27.336061 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 14 00:33:27.339000 audit[5514]: USER_START pid=5514 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:27.342000 audit[5518]: CRED_ACQ pid=5518 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:27.706456 sshd[5518]: Connection closed by 4.153.228.146 port 43956 Jan 14 00:33:27.705115 sshd-session[5514]: pam_unix(sshd:session): session closed for user core Jan 14 00:33:27.706000 audit[5514]: USER_END pid=5514 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:27.707000 audit[5514]: CRED_DISP pid=5514 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:27.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-91.99.0.249:22-4.153.228.146:43956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:33:27.716947 systemd-logind[1580]: Session 18 logged out. Waiting for processes to exit. Jan 14 00:33:27.717582 systemd[1]: sshd@43-91.99.0.249:22-4.153.228.146:43956.service: Deactivated successfully. Jan 14 00:33:27.723161 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 00:33:27.726381 systemd-logind[1580]: Removed session 18. Jan 14 00:33:27.771712 kubelet[2886]: E0114 00:33:27.771168 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-wv9l2" podUID="e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a" Jan 14 00:33:28.153426 update_engine[1587]: I20260114 00:33:28.152865 1587 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 00:33:28.153426 update_engine[1587]: I20260114 00:33:28.152977 1587 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 00:33:28.153426 update_engine[1587]: I20260114 00:33:28.153366 1587 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 00:33:28.154177 update_engine[1587]: E20260114 00:33:28.154135 1587 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 00:33:28.154325 update_engine[1587]: I20260114 00:33:28.154305 1587 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 14 00:33:28.154844 update_engine[1587]: I20260114 00:33:28.154366 1587 omaha_request_action.cc:617] Omaha request response: Jan 14 00:33:28.154844 update_engine[1587]: E20260114 00:33:28.154453 1587 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 14 00:33:28.154844 update_engine[1587]: I20260114 00:33:28.154469 1587 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 14 00:33:28.154844 update_engine[1587]: I20260114 00:33:28.154474 1587 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 00:33:28.154844 update_engine[1587]: I20260114 00:33:28.154479 1587 update_attempter.cc:306] Processing Done. Jan 14 00:33:28.154844 update_engine[1587]: E20260114 00:33:28.154495 1587 update_attempter.cc:619] Update failed. Jan 14 00:33:28.154844 update_engine[1587]: I20260114 00:33:28.154500 1587 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 14 00:33:28.154844 update_engine[1587]: I20260114 00:33:28.154505 1587 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 14 00:33:28.154844 update_engine[1587]: I20260114 00:33:28.154511 1587 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 14 00:33:28.154844 update_engine[1587]: I20260114 00:33:28.154621 1587 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 14 00:33:28.154844 update_engine[1587]: I20260114 00:33:28.154647 1587 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 14 00:33:28.154844 update_engine[1587]: I20260114 00:33:28.154654 1587 omaha_request_action.cc:272] Request: Jan 14 00:33:28.154844 update_engine[1587]: Jan 14 00:33:28.154844 update_engine[1587]: Jan 14 00:33:28.154844 update_engine[1587]: Jan 14 00:33:28.154844 update_engine[1587]: Jan 14 00:33:28.154844 update_engine[1587]: Jan 14 00:33:28.154844 update_engine[1587]: Jan 14 00:33:28.155268 update_engine[1587]: I20260114 00:33:28.154659 1587 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 00:33:28.155268 update_engine[1587]: I20260114 00:33:28.154681 1587 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 00:33:28.155320 locksmithd[1633]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 14 00:33:28.157217 update_engine[1587]: I20260114 00:33:28.157145 1587 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 00:33:28.158048 update_engine[1587]: E20260114 00:33:28.157790 1587 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 00:33:28.158048 update_engine[1587]: I20260114 00:33:28.157897 1587 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 14 00:33:28.158048 update_engine[1587]: I20260114 00:33:28.157907 1587 omaha_request_action.cc:617] Omaha request response: Jan 14 00:33:28.158048 update_engine[1587]: I20260114 00:33:28.157914 1587 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 00:33:28.158048 update_engine[1587]: I20260114 00:33:28.157920 1587 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 00:33:28.158048 update_engine[1587]: I20260114 00:33:28.157927 1587 update_attempter.cc:306] Processing Done. Jan 14 00:33:28.158048 update_engine[1587]: I20260114 00:33:28.157933 1587 update_attempter.cc:310] Error event sent. Jan 14 00:33:28.158048 update_engine[1587]: I20260114 00:33:28.157944 1587 update_check_scheduler.cc:74] Next update check in 48m41s Jan 14 00:33:28.158420 locksmithd[1633]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 14 00:33:29.771140 kubelet[2886]: E0114 00:33:29.771053 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66d779df8f-4d6qc" podUID="ef34e9a3-c694-439f-b718-c0f1c8545835" Jan 14 00:33:30.192000 audit[5533]: NETFILTER_CFG table=filter:143 family=2 entries=26 op=nft_register_rule pid=5533 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:33:30.192000 audit[5533]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcd607140 a2=0 a3=1 items=0 ppid=3053 pid=5533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:33:30.192000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:33:30.203000 audit[5533]: NETFILTER_CFG table=nat:144 family=2 entries=104 op=nft_register_chain pid=5533 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:33:30.203000 audit[5533]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffcd607140 a2=0 a3=1 items=0 ppid=3053 pid=5533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:33:30.203000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:33:31.773007 kubelet[2886]: E0114 00:33:31.772858 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rkfb" podUID="c4ec9a31-66c9-4bf7-a831-6c170af7211c" Jan 14 00:33:32.835697 kernel: kauditd_printk_skb: 17 callbacks suppressed Jan 14 00:33:32.835915 kernel: audit: type=1130 audit(1768350812.830:907): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-91.99.0.249:22-4.153.228.146:43960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:33:32.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-91.99.0.249:22-4.153.228.146:43960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:33:32.830935 systemd[1]: Started sshd@44-91.99.0.249:22-4.153.228.146:43960.service - OpenSSH per-connection server daemon (4.153.228.146:43960). 
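The update_engine entries above trace a complete failed check: each attempt sets up a libcurl transfer with a 1-second timeout source, fails with "Could not resolve host: disabled" (the configured Omaha endpoint appears in the log as the literal string "disabled"), retries a few times, converts the failure to kActionCodeOmahaErrorInHTTPResponse, and reschedules ("Next update check in 48m41s"). The sketch below only illustrates that bounded-retry-then-reschedule control flow in Python; the real update_engine is C++ and none of these names are its API.

```python
import socket
import time

# Bounded retries, then the whole check is reported as one failure and
# rescheduled -- the shape visible in the update_engine entries above.
# Host name, retry count and delay are illustrative.
def omaha_check(host: str, max_retries: int = 3, delay_s: float = 1.0) -> bool:
    for attempt in range(1, max_retries + 1):
        try:
            socket.getaddrinfo(host, 443)   # stands in for the HTTPS fetch
            return True                     # a real check would parse the Omaha response
        except socket.gaierror:
            print(f"No HTTP response, retry {attempt}")
            time.sleep(delay_s)
    print("Omaha request network transfer failed; rescheduling next check")
    return False

# omaha_check("disabled") fails every attempt, matching
# "Could not resolve host: disabled (Domain name not found)".
```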
Jan 14 00:33:33.443000 audit[5535]: USER_ACCT pid=5535 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:33.447248 sshd[5535]: Accepted publickey for core from 4.153.228.146 port 43960 ssh2: RSA SHA256:G2BOS0iIRk9EQIJiUwTXMI6Ge/QGgk5HV0uKx8xVGik Jan 14 00:33:33.448858 kernel: audit: type=1101 audit(1768350813.443:908): pid=5535 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:33.448000 audit[5535]: CRED_ACQ pid=5535 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:33.450238 sshd-session[5535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:33:33.456060 kernel: audit: type=1103 audit(1768350813.448:909): pid=5535 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 00:33:33.456174 kernel: audit: type=1006 audit(1768350813.448:910): pid=5535 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 14 00:33:33.458748 kernel: audit: type=1300 audit(1768350813.448:910): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc9096a70 a2=3 a3=0 items=0 ppid=1 pid=5535 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:33:33.448000 audit[5535]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc9096a70 a2=3 a3=0 items=0 ppid=1 pid=5535 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:33:33.460437 kernel: audit: type=1327 audit(1768350813.448:910): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:33:33.448000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:33:33.469162 systemd-logind[1580]: New session 19 of user core. Jan 14 00:33:33.476039 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 14 00:33:33.482000 audit[5535]: USER_START pid=5535 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 00:33:33.492250 kernel: audit: type=1105 audit(1768350813.482:911): pid=5535 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 00:33:33.492336 kernel: audit: type=1103 audit(1768350813.489:912): pid=5539 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 00:33:33.489000 audit[5539]: CRED_ACQ pid=5539 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 00:33:33.768164 kubelet[2886]: E0114 00:33:33.767958 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5cbc4f559-f6xkt" podUID="c604fd3c-83a2-496b-a7f9-8f8a03e00409"
Jan 14 00:33:33.876876 sshd[5539]: Connection closed by 4.153.228.146 port 43960
Jan 14 00:33:33.877718 sshd-session[5535]: pam_unix(sshd:session): session closed for user core
Jan 14 00:33:33.880000 audit[5535]: USER_END pid=5535 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 00:33:33.880000 audit[5535]: CRED_DISP pid=5535 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 00:33:33.889622 kernel: audit: type=1106 audit(1768350813.880:913): pid=5535 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 00:33:33.889718 kernel: audit: type=1104 audit(1768350813.880:914): pid=5535 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 00:33:33.887867 systemd[1]: sshd@44-91.99.0.249:22-4.153.228.146:43960.service: Deactivated successfully.
Jan 14 00:33:33.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-91.99.0.249:22-4.153.228.146:43960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:33:33.896652 systemd[1]: session-19.scope: Deactivated successfully.
Jan 14 00:33:33.900056 systemd-logind[1580]: Session 19 logged out. Waiting for processes to exit.
Jan 14 00:33:33.903690 systemd-logind[1580]: Removed session 19.
Jan 14 00:33:35.768848 kubelet[2886]: E0114 00:33:35.768195 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nsbsf" podUID="67055487-2b15-4e2a-8975-7fee787b4309"
Jan 14 00:33:38.767784 kubelet[2886]: E0114 00:33:38.767445 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-wv9l2" podUID="e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a"
Jan 14 00:33:38.767784 kubelet[2886]: E0114 00:33:38.767521 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-jdxwl" podUID="900e30be-5423-4b2c-9623-a51920c0a748"
Jan 14 00:33:38.996981 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 14 00:33:38.997088 kernel: audit: type=1130 audit(1768350818.993:916): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-91.99.0.249:22-4.153.228.146:59406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:33:38.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-91.99.0.249:22-4.153.228.146:59406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:33:38.994458 systemd[1]: Started sshd@45-91.99.0.249:22-4.153.228.146:59406.service - OpenSSH per-connection server daemon (4.153.228.146:59406).
Jan 14 00:33:39.579000 audit[5554]: USER_ACCT pid=5554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 00:33:39.580991 sshd[5554]: Accepted publickey for core from 4.153.228.146 port 59406 ssh2: RSA SHA256:G2BOS0iIRk9EQIJiUwTXMI6Ge/QGgk5HV0uKx8xVGik
Jan 14 00:33:39.583000 audit[5554]: CRED_ACQ pid=5554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 00:33:39.585301 sshd-session[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:33:39.587328 kernel: audit: type=1101 audit(1768350819.579:917): pid=5554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 00:33:39.587374 kernel: audit: type=1103 audit(1768350819.583:918): pid=5554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 00:33:39.589738 kernel: audit: type=1006 audit(1768350819.583:919): pid=5554 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1
Jan 14 00:33:39.583000 audit[5554]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffdc993b0 a2=3 a3=0 items=0 ppid=1 pid=5554 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:33:39.594181 kernel: audit: type=1300 audit(1768350819.583:919): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffdc993b0 a2=3 a3=0 items=0 ppid=1 pid=5554 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:33:39.583000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 00:33:39.598316 kernel: audit: type=1327 audit(1768350819.583:919): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 00:33:39.602421 systemd-logind[1580]: New session 20 of user core.
Jan 14 00:33:39.609144 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 14 00:33:39.616000 audit[5554]: USER_START pid=5554 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 00:33:39.621000 audit[5558]: CRED_ACQ pid=5558 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 00:33:39.624328 kernel: audit: type=1105 audit(1768350819.616:920): pid=5554 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 00:33:39.624409 kernel: audit: type=1103 audit(1768350819.621:921): pid=5558 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 00:33:40.021256 sshd[5558]: Connection closed by 4.153.228.146 port 59406
Jan 14 00:33:40.021847 sshd-session[5554]: pam_unix(sshd:session): session closed for user core
Jan 14 00:33:40.024000 audit[5554]: USER_END pid=5554 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 00:33:40.035204 kernel: audit: type=1106 audit(1768350820.024:922): pid=5554 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 00:33:40.035327 kernel: audit: type=1104 audit(1768350820.024:923): pid=5554 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 00:33:40.024000 audit[5554]: CRED_DISP pid=5554 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 00:33:40.036003 systemd[1]: sshd@45-91.99.0.249:22-4.153.228.146:59406.service: Deactivated successfully.
Jan 14 00:33:40.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-91.99.0.249:22-4.153.228.146:59406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:33:40.042230 systemd[1]: session-20.scope: Deactivated successfully.
Jan 14 00:33:40.046973 systemd-logind[1580]: Session 20 logged out. Waiting for processes to exit.
Jan 14 00:33:40.049526 systemd-logind[1580]: Removed session 20.
Jan 14 00:33:41.772035 kubelet[2886]: E0114 00:33:41.771926 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66d779df8f-4d6qc" podUID="ef34e9a3-c694-439f-b718-c0f1c8545835"
Jan 14 00:33:45.772554 kubelet[2886]: E0114 00:33:45.772502 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rkfb" podUID="c4ec9a31-66c9-4bf7-a831-6c170af7211c"
Jan 14 00:33:46.768302 kubelet[2886]: E0114 00:33:46.768236 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nsbsf" podUID="67055487-2b15-4e2a-8975-7fee787b4309"
Jan 14 00:33:47.768763 kubelet[2886]: E0114 00:33:47.768563 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5cbc4f559-f6xkt" podUID="c604fd3c-83a2-496b-a7f9-8f8a03e00409"
Jan 14 00:33:50.768952 kubelet[2886]: E0114 00:33:50.768548 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-jdxwl" podUID="900e30be-5423-4b2c-9623-a51920c0a748"
Jan 14 00:33:52.767785 kubelet[2886]: E0114 00:33:52.767711 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b56c4d86d-wv9l2" podUID="e1fe6e4c-0c0d-49c4-b91f-2f3917a3c39a"
Jan 14 00:33:53.772924 kubelet[2886]: E0114 00:33:53.772757 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66d779df8f-4d6qc" podUID="ef34e9a3-c694-439f-b718-c0f1c8545835"
Jan 14 00:33:55.029488 systemd[1]: cri-containerd-9845a028f70fe7bfbbddbc2cd1d62da79bf0df3f6be91847ea2701d0ad58d687.scope: Deactivated successfully.
Jan 14 00:33:55.032068 systemd[1]: cri-containerd-9845a028f70fe7bfbbddbc2cd1d62da79bf0df3f6be91847ea2701d0ad58d687.scope: Consumed 39.445s CPU time, 113.9M memory peak.
Jan 14 00:33:55.035486 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 14 00:33:55.035635 kernel: audit: type=1334 audit(1768350835.032:925): prog-id=144 op=UNLOAD
Jan 14 00:33:55.032000 audit: BPF prog-id=144 op=UNLOAD
Jan 14 00:33:55.039329 kernel: audit: type=1334 audit(1768350835.032:926): prog-id=148 op=UNLOAD
Jan 14 00:33:55.032000 audit: BPF prog-id=148 op=UNLOAD
Jan 14 00:33:55.039509 containerd[1612]: time="2026-01-14T00:33:55.036798543Z" level=info msg="received container exit event container_id:\"9845a028f70fe7bfbbddbc2cd1d62da79bf0df3f6be91847ea2701d0ad58d687\" id:\"9845a028f70fe7bfbbddbc2cd1d62da79bf0df3f6be91847ea2701d0ad58d687\" pid:3231 exit_status:1 exited_at:{seconds:1768350835 nanos:34344772}"
Jan 14 00:33:55.077580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9845a028f70fe7bfbbddbc2cd1d62da79bf0df3f6be91847ea2701d0ad58d687-rootfs.mount: Deactivated successfully.
Jan 14 00:33:55.491615 kubelet[2886]: E0114 00:33:55.488117 2886 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:39216->10.0.0.2:2379: read: connection timed out"
Jan 14 00:33:55.770969 kubelet[2886]: I0114 00:33:55.770277 2886 scope.go:117] "RemoveContainer" containerID="9845a028f70fe7bfbbddbc2cd1d62da79bf0df3f6be91847ea2701d0ad58d687"
Jan 14 00:33:55.782528 containerd[1612]: time="2026-01-14T00:33:55.782107715Z" level=info msg="CreateContainer within sandbox \"9baa613353c1749552e7472fef947d75a1fceb08132c0dfb09eb57748bd85cbd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 14 00:33:55.798205 containerd[1612]: time="2026-01-14T00:33:55.798097722Z" level=info msg="Container 4b801d7fdf593e1c3a7b59fd93a5c22c35f34df80d20e25d83f347cc59da3042: CDI devices from CRI Config.CDIDevices: []"
Jan 14 00:33:55.816893 containerd[1612]: time="2026-01-14T00:33:55.816059850Z" level=info msg="CreateContainer within sandbox \"9baa613353c1749552e7472fef947d75a1fceb08132c0dfb09eb57748bd85cbd\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"4b801d7fdf593e1c3a7b59fd93a5c22c35f34df80d20e25d83f347cc59da3042\""
Jan 14 00:33:55.818534 containerd[1612]: time="2026-01-14T00:33:55.818098011Z" level=info msg="StartContainer for \"4b801d7fdf593e1c3a7b59fd93a5c22c35f34df80d20e25d83f347cc59da3042\""
Jan 14 00:33:55.819555 containerd[1612]: time="2026-01-14T00:33:55.819465799Z" level=info msg="connecting to shim 4b801d7fdf593e1c3a7b59fd93a5c22c35f34df80d20e25d83f347cc59da3042" address="unix:///run/containerd/s/c1de1d4aec5203fb8e2adbff6cf5499c9e2aec30324f0a5d42e8ca52ad8f4033" protocol=ttrpc version=3
Jan 14 00:33:55.861169 systemd[1]: Started cri-containerd-4b801d7fdf593e1c3a7b59fd93a5c22c35f34df80d20e25d83f347cc59da3042.scope - libcontainer container 4b801d7fdf593e1c3a7b59fd93a5c22c35f34df80d20e25d83f347cc59da3042.
Jan 14 00:33:55.881000 audit: BPF prog-id=254 op=LOAD
Jan 14 00:33:55.883000 audit: BPF prog-id=255 op=LOAD
Jan 14 00:33:55.886250 kernel: audit: type=1334 audit(1768350835.881:927): prog-id=254 op=LOAD
Jan 14 00:33:55.886519 kernel: audit: type=1334 audit(1768350835.883:928): prog-id=255 op=LOAD
Jan 14 00:33:55.883000 audit[5606]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2974 pid=5606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:33:55.889701 kernel: audit: type=1300 audit(1768350835.883:928): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2974 pid=5606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:33:55.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462383031643766646635393365316333613762353966643933613563
Jan 14 00:33:55.893174 kernel: audit: type=1327 audit(1768350835.883:928): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462383031643766646635393365316333613762353966643933613563
Jan 14 00:33:55.893424 kernel: audit: type=1334 audit(1768350835.884:929): prog-id=255 op=UNLOAD
Jan 14 00:33:55.884000 audit: BPF prog-id=255 op=UNLOAD
Jan 14 00:33:55.894638 kernel: audit: type=1300 audit(1768350835.884:929): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2974 pid=5606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:33:55.884000 audit[5606]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2974 pid=5606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:33:55.884000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462383031643766646635393365316333613762353966643933613563
Jan 14 00:33:55.899082 kernel: audit: type=1327 audit(1768350835.884:929): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462383031643766646635393365316333613762353966643933613563
Jan 14 00:33:55.885000 audit: BPF prog-id=256 op=LOAD
Jan 14 00:33:55.900610 kernel: audit: type=1334 audit(1768350835.885:930): prog-id=256 op=LOAD
Jan 14 00:33:55.885000 audit[5606]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2974 pid=5606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:33:55.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462383031643766646635393365316333613762353966643933613563
Jan 14 00:33:55.885000 audit: BPF prog-id=257 op=LOAD
Jan 14 00:33:55.885000 audit[5606]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2974 pid=5606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:33:55.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462383031643766646635393365316333613762353966643933613563
Jan 14 00:33:55.889000 audit: BPF prog-id=257 op=UNLOAD
Jan 14 00:33:55.889000 audit[5606]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2974 pid=5606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:33:55.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462383031643766646635393365316333613762353966643933613563
Jan 14 00:33:55.889000 audit: BPF prog-id=256 op=UNLOAD
Jan 14 00:33:55.889000 audit[5606]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2974 pid=5606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:33:55.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462383031643766646635393365316333613762353966643933613563
Jan 14 00:33:55.889000 audit: BPF prog-id=258 op=LOAD
Jan 14 00:33:55.889000 audit[5606]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2974 pid=5606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:33:55.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462383031643766646635393365316333613762353966643933613563
Jan 14 00:33:55.934592 containerd[1612]: time="2026-01-14T00:33:55.934511794Z" level=info msg="StartContainer for \"4b801d7fdf593e1c3a7b59fd93a5c22c35f34df80d20e25d83f347cc59da3042\" returns successfully"
Jan 14 00:33:56.255901 kubelet[2886]: E0114 00:33:56.254370 2886 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:38988->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{csi-node-driver-8rkfb.188a7194f00a96c1 calico-system 1681 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:csi-node-driver-8rkfb,UID:c4ec9a31-66c9-4bf7-a831-6c170af7211c,APIVersion:v1,ResourceVersion:767,FieldPath:spec.containers{calico-csi},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/csi:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4547-0-0-n-a43761813d,},FirstTimestamp:2026-01-14 00:31:28 +0000 UTC,LastTimestamp:2026-01-14 00:33:45.769353568 +0000 UTC m=+196.165449403,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547-0-0-n-a43761813d,}"
Jan 14 00:33:56.773551 systemd[1]: cri-containerd-0ec656bf17eff951feffd967e5927545d4cf9ec24b5f13d6c2e4b347262e93b4.scope: Deactivated successfully.
Jan 14 00:33:56.774076 systemd[1]: cri-containerd-0ec656bf17eff951feffd967e5927545d4cf9ec24b5f13d6c2e4b347262e93b4.scope: Consumed 5.630s CPU time, 67.4M memory peak, 3.5M read from disk.
Jan 14 00:33:56.777000 audit: BPF prog-id=259 op=LOAD
Jan 14 00:33:56.777000 audit: BPF prog-id=86 op=UNLOAD
Jan 14 00:33:56.778000 audit: BPF prog-id=101 op=UNLOAD
Jan 14 00:33:56.778000 audit: BPF prog-id=105 op=UNLOAD
Jan 14 00:33:56.780023 containerd[1612]: time="2026-01-14T00:33:56.779981446Z" level=info msg="received container exit event container_id:\"0ec656bf17eff951feffd967e5927545d4cf9ec24b5f13d6c2e4b347262e93b4\" id:\"0ec656bf17eff951feffd967e5927545d4cf9ec24b5f13d6c2e4b347262e93b4\" pid:2716 exit_status:1 exited_at:{seconds:1768350836 nanos:778536216}"
Jan 14 00:33:56.814141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ec656bf17eff951feffd967e5927545d4cf9ec24b5f13d6c2e4b347262e93b4-rootfs.mount: Deactivated successfully.
Jan 14 00:33:57.768671 kubelet[2886]: E0114 00:33:57.768603 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nsbsf" podUID="67055487-2b15-4e2a-8975-7fee787b4309"
Jan 14 00:33:57.792594 kubelet[2886]: I0114 00:33:57.792229 2886 scope.go:117] "RemoveContainer" containerID="0ec656bf17eff951feffd967e5927545d4cf9ec24b5f13d6c2e4b347262e93b4"
Jan 14 00:33:57.796166 containerd[1612]: time="2026-01-14T00:33:57.796108755Z" level=info msg="CreateContainer within sandbox \"d10aba203b91f3066d1ae28a0cb630b12ad3e182c90f4a3bb4ce2ee54ddb7708\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 14 00:33:57.815894 containerd[1612]: time="2026-01-14T00:33:57.814915570Z" level=info msg="Container 1858cc52167e651aacc2dc3ae35ab5746d43828278cc30e6d1c86a90a98f7794: CDI devices from CRI Config.CDIDevices: []"
Jan 14 00:33:57.827257 containerd[1612]: time="2026-01-14T00:33:57.826988491Z" level=info msg="CreateContainer within sandbox \"d10aba203b91f3066d1ae28a0cb630b12ad3e182c90f4a3bb4ce2ee54ddb7708\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"1858cc52167e651aacc2dc3ae35ab5746d43828278cc30e6d1c86a90a98f7794\""
Jan 14 00:33:57.829092 containerd[1612]: time="2026-01-14T00:33:57.828117593Z" level=info msg="StartContainer for \"1858cc52167e651aacc2dc3ae35ab5746d43828278cc30e6d1c86a90a98f7794\""
Jan 14 00:33:57.830287 containerd[1612]: time="2026-01-14T00:33:57.830191435Z" level=info msg="connecting to shim 1858cc52167e651aacc2dc3ae35ab5746d43828278cc30e6d1c86a90a98f7794" address="unix:///run/containerd/s/fa4c3e249692259f9c4cc8a02af2d6e0e8eb4e3cab9ebfb95c96c41c1cc73ba2" protocol=ttrpc version=3
Jan 14 00:33:57.859123 systemd[1]: Started cri-containerd-1858cc52167e651aacc2dc3ae35ab5746d43828278cc30e6d1c86a90a98f7794.scope - libcontainer container 1858cc52167e651aacc2dc3ae35ab5746d43828278cc30e6d1c86a90a98f7794.
Jan 14 00:33:57.879000 audit: BPF prog-id=260 op=LOAD
Jan 14 00:33:57.880000 audit: BPF prog-id=261 op=LOAD
Jan 14 00:33:57.880000 audit[5650]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2557 pid=5650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:33:57.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138353863633532313637653635316161636332646333616533356162
Jan 14 00:33:57.880000 audit: BPF prog-id=261 op=UNLOAD
Jan 14 00:33:57.880000 audit[5650]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2557 pid=5650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:33:57.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138353863633532313637653635316161636332646333616533356162
Jan 14 00:33:57.881000 audit: BPF prog-id=262 op=LOAD
Jan 14 00:33:57.881000 audit[5650]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2557 pid=5650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:33:57.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138353863633532313637653635316161636332646333616533356162
Jan 14 00:33:57.882000 audit: BPF prog-id=263 op=LOAD
Jan 14 00:33:57.882000 audit[5650]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2557 pid=5650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:33:57.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138353863633532313637653635316161636332646333616533356162
Jan 14 00:33:57.882000 audit: BPF prog-id=263 op=UNLOAD
Jan 14 00:33:57.882000 audit[5650]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2557 pid=5650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:33:57.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138353863633532313637653635316161636332646333616533356162
Jan 14 00:33:57.882000 audit: BPF prog-id=262 op=UNLOAD
Jan 14 00:33:57.882000 audit[5650]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2557 pid=5650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:33:57.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138353863633532313637653635316161636332646333616533356162
Jan 14 00:33:57.882000 audit: BPF prog-id=264 op=LOAD
Jan 14 00:33:57.882000 audit[5650]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2557 pid=5650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:33:57.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138353863633532313637653635316161636332646333616533356162
Jan 14 00:33:57.916565 containerd[1612]: time="2026-01-14T00:33:57.916518835Z" level=info msg="StartContainer for \"1858cc52167e651aacc2dc3ae35ab5746d43828278cc30e6d1c86a90a98f7794\" returns successfully"
Jan 14 00:33:58.769234 kubelet[2886]: E0114 00:33:58.769158 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rkfb" podUID="c4ec9a31-66c9-4bf7-a831-6c170af7211c"