Jan 16 18:03:29.875568 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jan 16 18:03:29.875616 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Jan 16 03:04:27 -00 2026 Jan 16 18:03:29.875642 kernel: KASLR disabled due to lack of seed Jan 16 18:03:29.875659 kernel: efi: EFI v2.7 by EDK II Jan 16 18:03:29.875675 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78557598 Jan 16 18:03:29.875691 kernel: secureboot: Secure boot disabled Jan 16 18:03:29.875709 kernel: ACPI: Early table checksum verification disabled Jan 16 18:03:29.875725 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jan 16 18:03:29.875741 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jan 16 18:03:29.875761 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 16 18:03:29.875778 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 16 18:03:29.875793 kernel: ACPI: FACS 0x0000000078630000 000040 Jan 16 18:03:29.875809 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 16 18:03:29.875826 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jan 16 18:03:29.875849 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jan 16 18:03:29.875897 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jan 16 18:03:29.875920 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 16 18:03:29.875938 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jan 16 18:03:29.875955 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jan 16 18:03:29.875972 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jan 16 18:03:29.875990 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jan 16 18:03:29.876007 kernel: printk: legacy bootconsole [uart0] enabled Jan 16 18:03:29.876024 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 16 18:03:29.876042 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jan 16 18:03:29.876064 kernel: NODE_DATA(0) allocated [mem 0x4b584ea00-0x4b5855fff] Jan 16 18:03:29.876082 kernel: Zone ranges: Jan 16 18:03:29.876099 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 16 18:03:29.876203 kernel: DMA32 empty Jan 16 18:03:29.876225 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jan 16 18:03:29.876243 kernel: Device empty Jan 16 18:03:29.876259 kernel: Movable zone start for each node Jan 16 18:03:29.876276 kernel: Early memory node ranges Jan 16 18:03:29.876294 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jan 16 18:03:29.876311 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jan 16 18:03:29.876327 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jan 16 18:03:29.876344 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jan 16 18:03:29.876369 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jan 16 18:03:29.876386 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jan 16 18:03:29.876403 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jan 16 18:03:29.876420 
kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jan 16 18:03:29.876444 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jan 16 18:03:29.876466 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jan 16 18:03:29.876484 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Jan 16 18:03:29.876502 kernel: psci: probing for conduit method from ACPI. Jan 16 18:03:29.876520 kernel: psci: PSCIv1.0 detected in firmware. Jan 16 18:03:29.876537 kernel: psci: Using standard PSCI v0.2 function IDs Jan 16 18:03:29.876555 kernel: psci: Trusted OS migration not required Jan 16 18:03:29.876572 kernel: psci: SMC Calling Convention v1.1 Jan 16 18:03:29.876591 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Jan 16 18:03:29.876608 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 16 18:03:29.876630 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 16 18:03:29.876649 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 16 18:03:29.876666 kernel: Detected PIPT I-cache on CPU0 Jan 16 18:03:29.876684 kernel: CPU features: detected: GIC system register CPU interface Jan 16 18:03:29.876702 kernel: CPU features: detected: Spectre-v2 Jan 16 18:03:29.876720 kernel: CPU features: detected: Spectre-v3a Jan 16 18:03:29.876737 kernel: CPU features: detected: Spectre-BHB Jan 16 18:03:29.876755 kernel: CPU features: detected: ARM erratum 1742098 Jan 16 18:03:29.876773 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jan 16 18:03:29.876790 kernel: alternatives: applying boot alternatives Jan 16 18:03:29.876811 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=924fb3eb04ba1d8edcb66284d30e3342855b0579b62556e7722bcf37e82bda13 Jan 16 18:03:29.876833 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 16 18:03:29.876851 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 16 18:03:29.876869 kernel: Fallback order for Node 0: 0 Jan 16 18:03:29.876886 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Jan 16 18:03:29.876904 kernel: Policy zone: Normal Jan 16 18:03:29.876922 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 16 18:03:29.876939 kernel: software IO TLB: area num 2. Jan 16 18:03:29.876957 kernel: software IO TLB: mapped [mem 0x000000006f800000-0x0000000073800000] (64MB) Jan 16 18:03:29.876975 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 16 18:03:29.876993 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 16 18:03:29.877016 kernel: rcu: RCU event tracing is enabled. Jan 16 18:03:29.877034 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 16 18:03:29.877052 kernel: Trampoline variant of Tasks RCU enabled. Jan 16 18:03:29.877070 kernel: Tracing variant of Tasks RCU enabled. Jan 16 18:03:29.877088 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 16 18:03:29.877106 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 16 18:03:29.877159 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 16 18:03:29.877179 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 16 18:03:29.877197 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 16 18:03:29.877215 kernel: GICv3: 96 SPIs implemented Jan 16 18:03:29.877233 kernel: GICv3: 0 Extended SPIs implemented Jan 16 18:03:29.877257 kernel: Root IRQ handler: gic_handle_irq Jan 16 18:03:29.877275 kernel: GICv3: GICv3 features: 16 PPIs Jan 16 18:03:29.877293 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jan 16 18:03:29.877311 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jan 16 18:03:29.877329 kernel: ITS [mem 0x10080000-0x1009ffff] Jan 16 18:03:29.877446 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Jan 16 18:03:29.877810 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Jan 16 18:03:29.877835 kernel: GICv3: using LPI property table @0x0000000400110000 Jan 16 18:03:29.878236 kernel: ITS: Using hypervisor restricted LPI range [128] Jan 16 18:03:29.878622 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Jan 16 18:03:29.878999 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 16 18:03:29.879395 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jan 16 18:03:29.879798 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jan 16 18:03:29.879832 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jan 16 18:03:29.879855 kernel: Console: colour dummy device 80x25 Jan 16 18:03:29.879876 kernel: printk: legacy console [tty1] enabled Jan 16 18:03:29.879895 kernel: ACPI: Core revision 20240827 Jan 16 18:03:29.879917 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jan 16 18:03:29.879936 kernel: pid_max: default: 32768 minimum: 301 Jan 16 18:03:29.879961 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 16 18:03:29.879981 kernel: landlock: Up and running. Jan 16 18:03:29.880000 kernel: SELinux: Initializing. Jan 16 18:03:29.880019 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 16 18:03:29.880039 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 16 18:03:29.880058 kernel: rcu: Hierarchical SRCU implementation. Jan 16 18:03:29.880078 kernel: rcu: Max phase no-delay instances is 400. Jan 16 18:03:29.880097 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 16 18:03:29.880142 kernel: Remapping and enabling EFI services. Jan 16 18:03:29.880163 kernel: smp: Bringing up secondary CPUs ... Jan 16 18:03:29.880182 kernel: Detected PIPT I-cache on CPU1 Jan 16 18:03:29.880201 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jan 16 18:03:29.880220 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Jan 16 18:03:29.880239 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jan 16 18:03:29.880257 kernel: smp: Brought up 1 node, 2 CPUs Jan 16 18:03:29.880283 kernel: SMP: Total of 2 processors activated. 
Jan 16 18:03:29.880302 kernel: CPU: All CPU(s) started at EL1 Jan 16 18:03:29.880332 kernel: CPU features: detected: 32-bit EL0 Support Jan 16 18:03:29.880355 kernel: CPU features: detected: 32-bit EL1 Support Jan 16 18:03:29.880374 kernel: CPU features: detected: CRC32 instructions Jan 16 18:03:29.880393 kernel: alternatives: applying system-wide alternatives Jan 16 18:03:29.880415 kernel: Memory: 3823340K/4030464K available (11200K kernel code, 2458K rwdata, 9092K rodata, 12480K init, 1038K bss, 185776K reserved, 16384K cma-reserved) Jan 16 18:03:29.880434 kernel: devtmpfs: initialized Jan 16 18:03:29.880458 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 16 18:03:29.880478 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 16 18:03:29.880497 kernel: 23632 pages in range for non-PLT usage Jan 16 18:03:29.880516 kernel: 515152 pages in range for PLT usage Jan 16 18:03:29.880535 kernel: pinctrl core: initialized pinctrl subsystem Jan 16 18:03:29.880558 kernel: SMBIOS 3.0.0 present. Jan 16 18:03:29.880578 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jan 16 18:03:29.880597 kernel: DMI: Memory slots populated: 0/0 Jan 16 18:03:29.880616 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 16 18:03:29.880636 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 16 18:03:29.880656 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 16 18:03:29.880675 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 16 18:03:29.880698 kernel: audit: initializing netlink subsys (disabled) Jan 16 18:03:29.880718 kernel: audit: type=2000 audit(0.226:1): state=initialized audit_enabled=0 res=1 Jan 16 18:03:29.880737 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 16 18:03:29.880756 kernel: cpuidle: using governor menu Jan 16 18:03:29.880775 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 16 18:03:29.880794 kernel: ASID allocator initialised with 65536 entries Jan 16 18:03:29.880813 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 16 18:03:29.880836 kernel: Serial: AMBA PL011 UART driver Jan 16 18:03:29.880855 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 16 18:03:29.880874 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 16 18:03:29.880894 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 16 18:03:29.880913 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 16 18:03:29.880932 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 16 18:03:29.880951 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 16 18:03:29.880974 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 16 18:03:29.880994 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 16 18:03:29.881013 kernel: ACPI: Added _OSI(Module Device) Jan 16 18:03:29.881032 kernel: ACPI: Added _OSI(Processor Device) Jan 16 18:03:29.881051 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 16 18:03:29.881070 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 16 18:03:29.881089 kernel: ACPI: Interpreter enabled Jan 16 18:03:29.885431 kernel: ACPI: Using GIC for interrupt routing Jan 16 18:03:29.885482 kernel: ACPI: MCFG table detected, 1 entries Jan 16 18:03:29.885503 kernel: ACPI: CPU0 has been hot-added Jan 16 18:03:29.885522 kernel: ACPI: CPU1 has been hot-added Jan 16 18:03:29.885542 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00]) Jan 16 18:03:29.885962 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 16 18:03:29.886258 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 16 18:03:29.886530 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 16 18:03:29.886786 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00 Jan 16 18:03:29.887041 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00] Jan 16 18:03:29.887067 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jan 16 18:03:29.887086 kernel: acpiphp: Slot [1] registered Jan 16 18:03:29.887106 kernel: acpiphp: Slot [2] registered Jan 16 18:03:29.887168 kernel: acpiphp: Slot [3] registered Jan 16 18:03:29.887188 kernel: acpiphp: Slot [4] registered Jan 16 18:03:29.887207 kernel: acpiphp: Slot [5] registered Jan 16 18:03:29.887226 kernel: acpiphp: Slot [6] registered Jan 16 18:03:29.887246 kernel: acpiphp: Slot [7] registered Jan 16 18:03:29.887265 kernel: acpiphp: Slot [8] registered Jan 16 18:03:29.887284 kernel: acpiphp: Slot [9] registered Jan 16 18:03:29.887303 kernel: acpiphp: Slot [10] registered Jan 16 18:03:29.887327 kernel: acpiphp: Slot [11] registered Jan 16 18:03:29.887346 kernel: acpiphp: Slot [12] registered Jan 16 18:03:29.887365 kernel: acpiphp: Slot [13] registered Jan 16 18:03:29.887384 kernel: acpiphp: Slot [14] registered Jan 16 18:03:29.887404 kernel: acpiphp: Slot [15] registered Jan 16 18:03:29.887423 kernel: acpiphp: Slot [16] registered Jan 16 18:03:29.887443 kernel: acpiphp: Slot [17] registered Jan 16 18:03:29.887466 kernel: acpiphp: Slot [18] registered Jan 16 18:03:29.887485 kernel: acpiphp: Slot [19] registered Jan 16 18:03:29.887504 kernel: acpiphp: Slot [20] registered Jan 16 18:03:29.887523 kernel: acpiphp: Slot [21] registered Jan 16 18:03:29.887542 
kernel: acpiphp: Slot [22] registered Jan 16 18:03:29.887562 kernel: acpiphp: Slot [23] registered Jan 16 18:03:29.887581 kernel: acpiphp: Slot [24] registered Jan 16 18:03:29.887604 kernel: acpiphp: Slot [25] registered Jan 16 18:03:29.887623 kernel: acpiphp: Slot [26] registered Jan 16 18:03:29.887643 kernel: acpiphp: Slot [27] registered Jan 16 18:03:29.887661 kernel: acpiphp: Slot [28] registered Jan 16 18:03:29.887681 kernel: acpiphp: Slot [29] registered Jan 16 18:03:29.887700 kernel: acpiphp: Slot [30] registered Jan 16 18:03:29.887719 kernel: acpiphp: Slot [31] registered Jan 16 18:03:29.887738 kernel: PCI host bridge to bus 0000:00 Jan 16 18:03:29.888008 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jan 16 18:03:29.888291 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 16 18:03:29.888527 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jan 16 18:03:29.888759 kernel: pci_bus 0000:00: root bus resource [bus 00] Jan 16 18:03:29.889061 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Jan 16 18:03:29.889398 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Jan 16 18:03:29.889661 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Jan 16 18:03:29.889967 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Jan 16 18:03:29.890281 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Jan 16 18:03:29.890552 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 16 18:03:29.892382 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Jan 16 18:03:29.892647 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Jan 16 18:03:29.892902 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Jan 16 18:03:29.893180 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Jan 16 18:03:29.893442 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 16 18:03:29.893698 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jan 16 18:03:29.893982 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 16 18:03:29.894269 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jan 16 18:03:29.894297 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 16 18:03:29.894317 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 16 18:03:29.894338 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 16 18:03:29.894358 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 16 18:03:29.894377 kernel: iommu: Default domain type: Translated Jan 16 18:03:29.894404 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 16 18:03:29.894424 kernel: efivars: Registered efivars operations Jan 16 18:03:29.894444 kernel: vgaarb: loaded Jan 16 18:03:29.894463 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 16 18:03:29.894483 kernel: VFS: Disk quotas dquot_6.6.0 Jan 16 18:03:29.894502 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 16 18:03:29.894522 kernel: pnp: PnP ACPI init Jan 16 18:03:29.894807 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jan 16 18:03:29.894836 kernel: pnp: PnP ACPI: found 1 devices Jan 16 18:03:29.894856 kernel: NET: Registered PF_INET protocol family Jan 16 18:03:29.894876 kernel: IP idents hash table 
entries: 65536 (order: 7, 524288 bytes, linear) Jan 16 18:03:29.894897 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 16 18:03:29.894916 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 16 18:03:29.894936 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 16 18:03:29.894962 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 16 18:03:29.894981 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 16 18:03:29.895001 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 16 18:03:29.895021 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 16 18:03:29.895040 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 16 18:03:29.895059 kernel: PCI: CLS 0 bytes, default 64 Jan 16 18:03:29.895079 kernel: kvm [1]: HYP mode not available Jan 16 18:03:29.895102 kernel: Initialise system trusted keyrings Jan 16 18:03:29.895146 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 16 18:03:29.895168 kernel: Key type asymmetric registered Jan 16 18:03:29.895188 kernel: Asymmetric key parser 'x509' registered Jan 16 18:03:29.895207 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 16 18:03:29.895227 kernel: io scheduler mq-deadline registered Jan 16 18:03:29.895246 kernel: io scheduler kyber registered Jan 16 18:03:29.895271 kernel: io scheduler bfq registered Jan 16 18:03:29.895566 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jan 16 18:03:29.895594 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 16 18:03:29.895614 kernel: ACPI: button: Power Button [PWRB] Jan 16 18:03:29.895634 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jan 16 18:03:29.895654 kernel: ACPI: button: Sleep Button [SLPB] Jan 16 18:03:29.895678 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 16 18:03:29.895699 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 16 18:03:29.895959 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jan 16 18:03:29.895986 kernel: printk: legacy console [ttyS0] disabled Jan 16 18:03:29.896006 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jan 16 18:03:29.896026 kernel: printk: legacy console [ttyS0] enabled Jan 16 18:03:29.896045 kernel: printk: legacy bootconsole [uart0] disabled Jan 16 18:03:29.896070 kernel: thunder_xcv, ver 1.0 Jan 16 18:03:29.896090 kernel: thunder_bgx, ver 1.0 Jan 16 18:03:29.896109 kernel: nicpf, ver 1.0 Jan 16 18:03:29.896150 kernel: nicvf, ver 1.0 Jan 16 18:03:29.896447 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 16 18:03:29.896692 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-16T18:03:26 UTC (1768586606) Jan 16 18:03:29.896718 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 16 18:03:29.896744 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available Jan 16 18:03:29.896764 kernel: NET: Registered PF_INET6 protocol family Jan 16 18:03:29.896783 kernel: watchdog: NMI not fully supported Jan 16 18:03:29.896802 kernel: watchdog: Hard watchdog permanently disabled Jan 16 18:03:29.896822 kernel: Segment Routing with IPv6 Jan 16 18:03:29.896841 kernel: In-situ OAM (IOAM) with IPv6 Jan 16 18:03:29.896860 kernel: NET: Registered PF_PACKET protocol family Jan 16 18:03:29.896884 kernel: Key type 
dns_resolver registered Jan 16 18:03:29.896903 kernel: registered taskstats version 1 Jan 16 18:03:29.896922 kernel: Loading compiled-in X.509 certificates Jan 16 18:03:29.896941 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: 27e3aa638f3535434dc9dbdde4239fca944d5458' Jan 16 18:03:29.896961 kernel: Demotion targets for Node 0: null Jan 16 18:03:29.896980 kernel: Key type .fscrypt registered Jan 16 18:03:29.896999 kernel: Key type fscrypt-provisioning registered Jan 16 18:03:29.897022 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 16 18:03:29.897041 kernel: ima: Allocated hash algorithm: sha1 Jan 16 18:03:29.897060 kernel: ima: No architecture policies found Jan 16 18:03:29.897079 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 16 18:03:29.897099 kernel: clk: Disabling unused clocks Jan 16 18:03:29.897139 kernel: PM: genpd: Disabling unused power domains Jan 16 18:03:29.897161 kernel: Freeing unused kernel memory: 12480K Jan 16 18:03:29.897180 kernel: Run /init as init process Jan 16 18:03:29.897205 kernel: with arguments: Jan 16 18:03:29.897225 kernel: /init Jan 16 18:03:29.897243 kernel: with environment: Jan 16 18:03:29.897262 kernel: HOME=/ Jan 16 18:03:29.897281 kernel: TERM=linux Jan 16 18:03:29.897300 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 16 18:03:29.897519 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 16 18:03:29.897717 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 16 18:03:29.897744 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 16 18:03:29.897763 kernel: GPT:25804799 != 33554431 Jan 16 18:03:29.897782 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 16 18:03:29.897800 kernel: GPT:25804799 != 33554431 Jan 16 18:03:29.897819 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 16 18:03:29.897843 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 16 18:03:29.897918 kernel: SCSI subsystem initialized Jan 16 18:03:29.897968 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 16 18:03:29.897994 kernel: device-mapper: uevent: version 1.0.3 Jan 16 18:03:29.898014 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 16 18:03:29.898034 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 16 18:03:29.898054 kernel: raid6: neonx8 gen() 6631 MB/s Jan 16 18:03:29.898079 kernel: raid6: neonx4 gen() 6603 MB/s Jan 16 18:03:29.898098 kernel: raid6: neonx2 gen() 5477 MB/s Jan 16 18:03:29.898140 kernel: raid6: neonx1 gen() 3968 MB/s Jan 16 18:03:29.898164 kernel: raid6: int64x8 gen() 3663 MB/s Jan 16 18:03:29.898183 kernel: raid6: int64x4 gen() 3719 MB/s Jan 16 18:03:29.898204 kernel: raid6: int64x2 gen() 3622 MB/s Jan 16 18:03:29.902732 kernel: raid6: int64x1 gen() 2758 MB/s Jan 16 18:03:29.902776 kernel: raid6: using algorithm neonx8 gen() 6631 MB/s Jan 16 18:03:29.902796 kernel: raid6: .... 
xor() 4637 MB/s, rmw enabled Jan 16 18:03:29.902816 kernel: raid6: using neon recovery algorithm Jan 16 18:03:29.902836 kernel: xor: measuring software checksum speed Jan 16 18:03:29.902856 kernel: 8regs : 12930 MB/sec Jan 16 18:03:29.902875 kernel: 32regs : 12465 MB/sec Jan 16 18:03:29.902895 kernel: arm64_neon : 8916 MB/sec Jan 16 18:03:29.902919 kernel: xor: using function: 8regs (12930 MB/sec) Jan 16 18:03:29.902939 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 16 18:03:29.902960 kernel: BTRFS: device fsid 772c9e2d-7e98-4acf-842c-b5416fff0f38 devid 1 transid 34 /dev/mapper/usr (254:0) scanned by mount (221) Jan 16 18:03:29.902980 kernel: BTRFS info (device dm-0): first mount of filesystem 772c9e2d-7e98-4acf-842c-b5416fff0f38 Jan 16 18:03:29.903000 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 16 18:03:29.903020 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 16 18:03:29.903039 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 16 18:03:29.903063 kernel: BTRFS info (device dm-0): enabling free space tree Jan 16 18:03:29.903083 kernel: loop: module loaded Jan 16 18:03:29.903103 kernel: loop0: detected capacity change from 0 to 91832 Jan 16 18:03:29.903145 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 16 18:03:29.903171 systemd[1]: Successfully made /usr/ read-only. Jan 16 18:03:29.903198 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 16 18:03:29.903227 systemd[1]: Detected virtualization amazon. Jan 16 18:03:29.903248 systemd[1]: Detected architecture arm64. Jan 16 18:03:29.903269 systemd[1]: Running in initrd. Jan 16 18:03:29.903290 systemd[1]: No hostname configured, using default hostname. Jan 16 18:03:29.903312 systemd[1]: Hostname set to . Jan 16 18:03:29.903332 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 16 18:03:29.903353 systemd[1]: Queued start job for default target initrd.target. Jan 16 18:03:29.903378 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 16 18:03:29.903399 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 18:03:29.903419 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 18:03:29.903442 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 16 18:03:29.903465 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 16 18:03:29.903507 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 16 18:03:29.903530 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 16 18:03:29.903552 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 18:03:29.903573 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 16 18:03:29.903595 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 16 18:03:29.903620 systemd[1]: Reached target paths.target - Path Units. 
Jan 16 18:03:29.903641 systemd[1]: Reached target slices.target - Slice Units. Jan 16 18:03:29.903662 systemd[1]: Reached target swap.target - Swaps. Jan 16 18:03:29.903683 systemd[1]: Reached target timers.target - Timer Units. Jan 16 18:03:29.903704 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 16 18:03:29.903725 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 16 18:03:29.903746 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 16 18:03:29.903771 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 16 18:03:29.903793 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 16 18:03:29.903814 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 16 18:03:29.903836 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 16 18:03:29.903857 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 18:03:29.903878 systemd[1]: Reached target sockets.target - Socket Units. Jan 16 18:03:29.903900 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 16 18:03:29.903926 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 16 18:03:29.903947 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 16 18:03:29.903969 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 16 18:03:29.903991 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 16 18:03:29.904013 systemd[1]: Starting systemd-fsck-usr.service... Jan 16 18:03:29.904034 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 16 18:03:29.904055 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 16 18:03:29.904082 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 18:03:29.904104 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 16 18:03:29.904299 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 18:03:29.904322 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 16 18:03:29.904345 systemd[1]: Finished systemd-fsck-usr.service. Jan 16 18:03:29.904366 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 16 18:03:29.904387 kernel: Bridge firewalling registered Jan 16 18:03:29.904487 systemd-journald[360]: Collecting audit messages is enabled. Jan 16 18:03:29.904534 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 16 18:03:29.904557 kernel: audit: type=1130 audit(1768586609.878:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:29.904584 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 16 18:03:29.904606 kernel: audit: type=1130 audit(1768586609.897:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:29.904627 systemd-journald[360]: Journal started Jan 16 18:03:29.904666 systemd-journald[360]: Runtime Journal (/run/log/journal/ec2982262f99144e60a3fd9e2b69a3a8) is 8M, max 75.3M, 67.3M free. Jan 16 18:03:29.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:29.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:29.870256 systemd-modules-load[361]: Inserted module 'br_netfilter' Jan 16 18:03:29.911165 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 16 18:03:29.920568 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 16 18:03:29.925021 systemd[1]: Started systemd-journald.service - Journal Service. Jan 16 18:03:29.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:29.931171 kernel: audit: type=1130 audit(1768586609.924:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:29.931441 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 18:03:29.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:29.946814 kernel: audit: type=1130 audit(1768586609.933:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:29.949023 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 16 18:03:29.960569 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 16 18:03:29.983076 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 18:03:29.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:29.999241 kernel: audit: type=1130 audit(1768586609.987:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:30.002424 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 16 18:03:30.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:03:30.010000 audit: BPF prog-id=6 op=LOAD Jan 16 18:03:30.013045 kernel: audit: type=1130 audit(1768586610.006:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:30.013110 kernel: audit: type=1334 audit(1768586610.010:8): prog-id=6 op=LOAD Jan 16 18:03:30.013457 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 16 18:03:30.033727 systemd-tmpfiles[382]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 16 18:03:30.044853 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 18:03:30.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:30.056674 kernel: audit: type=1130 audit(1768586610.047:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:30.073929 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 18:03:30.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:30.092152 kernel: audit: type=1130 audit(1768586610.083:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:30.092469 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 16 18:03:30.175637 systemd-resolved[387]: Positive Trust Anchors: Jan 16 18:03:30.175672 systemd-resolved[387]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 16 18:03:30.194365 dracut-cmdline[400]: dracut-109 Jan 16 18:03:30.175681 systemd-resolved[387]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 16 18:03:30.175740 systemd-resolved[387]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 16 18:03:30.247505 dracut-cmdline[400]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=924fb3eb04ba1d8edcb66284d30e3342855b0579b62556e7722bcf37e82bda13 Jan 16 18:03:30.467182 kernel: random: crng init done Jan 16 18:03:30.475889 systemd-resolved[387]: Defaulting to hostname 'linux'. 
Jan 16 18:03:30.479236 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 16 18:03:30.485192 kernel: Loading iSCSI transport class v2.0-870. Jan 16 18:03:30.485289 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 16 18:03:30.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:30.497160 kernel: audit: type=1130 audit(1768586610.484:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:30.550157 kernel: iscsi: registered transport (tcp) Jan 16 18:03:30.601174 kernel: iscsi: registered transport (qla4xxx) Jan 16 18:03:30.601248 kernel: QLogic iSCSI HBA Driver Jan 16 18:03:30.640611 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 16 18:03:30.680642 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 16 18:03:30.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:30.689591 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 16 18:03:30.696586 kernel: audit: type=1130 audit(1768586610.685:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:30.776716 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 16 18:03:30.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:30.783891 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 16 18:03:30.795510 kernel: audit: type=1130 audit(1768586610.780:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:30.797347 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 16 18:03:30.846965 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 16 18:03:30.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:30.856000 audit: BPF prog-id=7 op=LOAD Jan 16 18:03:30.857000 audit: BPF prog-id=8 op=LOAD Jan 16 18:03:30.859184 kernel: audit: type=1130 audit(1768586610.848:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:30.859873 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 18:03:30.928821 systemd-udevd[628]: Using default interface naming scheme 'v257'. Jan 16 18:03:30.950643 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 16 18:03:30.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:30.960353 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 16 18:03:31.018895 dracut-pre-trigger[691]: rd.md=0: removing MD RAID activation Jan 16 18:03:31.045275 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 16 18:03:31.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:31.051000 audit: BPF prog-id=9 op=LOAD Jan 16 18:03:31.053778 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 16 18:03:31.085031 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 16 18:03:31.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:31.094171 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 16 18:03:31.155993 systemd-networkd[757]: lo: Link UP Jan 16 18:03:31.156013 systemd-networkd[757]: lo: Gained carrier Jan 16 18:03:31.162174 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 16 18:03:31.167233 systemd[1]: Reached target network.target - Network. Jan 16 18:03:31.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:31.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:31.255675 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 18:03:31.260410 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 16 18:03:31.470072 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 18:03:31.470783 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 18:03:31.477594 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 16 18:03:31.475659 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 18:03:31.492136 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 16 18:03:31.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:31.492035 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 18:03:31.501782 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 16 18:03:31.502264 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 16 18:03:31.512154 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:96:b3:f6:c2:db Jan 16 18:03:31.518008 (udev-worker)[796]: Network interface NamePolicy= disabled on kernel command line. 
Jan 16 18:03:31.526837 kernel: nvme nvme0: using unchecked data buffer Jan 16 18:03:31.537498 systemd-networkd[757]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 16 18:03:31.542534 systemd-networkd[757]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 18:03:31.556919 systemd-networkd[757]: eth0: Link UP Jan 16 18:03:31.560996 systemd-networkd[757]: eth0: Gained carrier Jan 16 18:03:31.561027 systemd-networkd[757]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 16 18:03:31.584244 systemd-networkd[757]: eth0: DHCPv4 address 172.31.22.249/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 16 18:03:31.589783 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 18:03:31.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:31.695091 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 16 18:03:31.719983 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 16 18:03:31.726166 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 16 18:03:31.765987 disk-uuid[864]: Primary Header is updated. Jan 16 18:03:31.765987 disk-uuid[864]: Secondary Entries is updated. Jan 16 18:03:31.765987 disk-uuid[864]: Secondary Header is updated. Jan 16 18:03:31.865915 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 16 18:03:31.895655 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 16 18:03:32.243210 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 16 18:03:32.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:32.247020 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 16 18:03:32.251258 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 18:03:32.260874 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 16 18:03:32.269472 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 16 18:03:32.321989 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 16 18:03:32.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:32.825129 disk-uuid[869]: Warning: The kernel is still using the old partition table. Jan 16 18:03:32.825129 disk-uuid[869]: The new table will be used at the next reboot or after you Jan 16 18:03:32.825129 disk-uuid[869]: run partprobe(8) or kpartx(8) Jan 16 18:03:32.825129 disk-uuid[869]: The operation has completed successfully. Jan 16 18:03:32.841589 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 16 18:03:32.842672 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jan 16 18:03:32.850348 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 16 18:03:32.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:32.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:32.911938 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1089) Jan 16 18:03:32.912023 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5e96ab2e-f088-4ca2-ba97-55451a1893dc Jan 16 18:03:32.912061 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 16 18:03:32.955885 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 16 18:03:32.955981 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 16 18:03:32.966229 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5e96ab2e-f088-4ca2-ba97-55451a1893dc Jan 16 18:03:32.968156 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 16 18:03:32.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:32.972931 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 16 18:03:33.195912 ignition[1108]: Ignition 2.24.0 Jan 16 18:03:33.195943 ignition[1108]: Stage: fetch-offline Jan 16 18:03:33.196381 ignition[1108]: no configs at "/usr/lib/ignition/base.d" Jan 16 18:03:33.196411 ignition[1108]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 16 18:03:33.199741 ignition[1108]: Ignition finished successfully Jan 16 18:03:33.206316 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 16 18:03:33.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:33.213822 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 16 18:03:33.252540 ignition[1118]: Ignition 2.24.0 Jan 16 18:03:33.252573 ignition[1118]: Stage: fetch Jan 16 18:03:33.252896 ignition[1118]: no configs at "/usr/lib/ignition/base.d" Jan 16 18:03:33.252921 ignition[1118]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 16 18:03:33.253518 ignition[1118]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 16 18:03:33.273270 ignition[1118]: PUT result: OK Jan 16 18:03:33.276854 ignition[1118]: parsed url from cmdline: "" Jan 16 18:03:33.276878 ignition[1118]: no config URL provided Jan 16 18:03:33.276897 ignition[1118]: reading system config file "/usr/lib/ignition/user.ign" Jan 16 18:03:33.276932 ignition[1118]: no config at "/usr/lib/ignition/user.ign" Jan 16 18:03:33.276981 ignition[1118]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 16 18:03:33.281539 ignition[1118]: PUT result: OK Jan 16 18:03:33.281641 ignition[1118]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 16 18:03:33.287836 ignition[1118]: GET result: OK Jan 16 18:03:33.287999 ignition[1118]: parsing config with SHA512: 15ca95bda3e9f225df419ebb0b6b4695c1522ab5088eb98a3901dd98ca7ee0081672e1613e322fb032fd466302f07e31d7746efb0acc4ffcfe7535f734b3a31b Jan 16 18:03:33.302041 unknown[1118]: fetched base config from "system" Jan 16 18:03:33.302329 unknown[1118]: fetched base config from "system" Jan 16 18:03:33.302999 ignition[1118]: fetch: fetch complete Jan 16 18:03:33.302344 unknown[1118]: fetched user config from "aws" Jan 16 18:03:33.303011 ignition[1118]: fetch: fetch passed Jan 16 18:03:33.303157 ignition[1118]: Ignition finished successfully Jan 16 18:03:33.317204 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 16 18:03:33.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:33.322727 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 16 18:03:33.375598 ignition[1125]: Ignition 2.24.0 Jan 16 18:03:33.375630 ignition[1125]: Stage: kargs Jan 16 18:03:33.376002 ignition[1125]: no configs at "/usr/lib/ignition/base.d" Jan 16 18:03:33.376024 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 16 18:03:33.377343 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 16 18:03:33.381496 ignition[1125]: PUT result: OK Jan 16 18:03:33.393653 ignition[1125]: kargs: kargs passed Jan 16 18:03:33.393777 ignition[1125]: Ignition finished successfully Jan 16 18:03:33.399065 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 16 18:03:33.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:33.406274 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 16 18:03:33.443529 ignition[1131]: Ignition 2.24.0 Jan 16 18:03:33.444033 ignition[1131]: Stage: disks Jan 16 18:03:33.444462 ignition[1131]: no configs at "/usr/lib/ignition/base.d" Jan 16 18:03:33.444485 ignition[1131]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 16 18:03:33.444641 ignition[1131]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 16 18:03:33.454178 ignition[1131]: PUT result: OK Jan 16 18:03:33.462748 ignition[1131]: disks: disks passed Jan 16 18:03:33.462868 ignition[1131]: Ignition finished successfully Jan 16 18:03:33.469187 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 16 18:03:33.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:33.470054 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 16 18:03:33.470564 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 16 18:03:33.470939 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 16 18:03:33.471745 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 18:03:33.472061 systemd[1]: Reached target basic.target - Basic System. Jan 16 18:03:33.481213 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 16 18:03:33.512303 systemd-networkd[757]: eth0: Gained IPv6LL Jan 16 18:03:33.611324 systemd-fsck[1141]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 16 18:03:33.615883 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 16 18:03:33.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:33.620795 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 16 18:03:33.764170 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 3360ad79-d1e3-4f32-ae7d-4a8c0a3c719d r/w with ordered data mode. Quota mode: none. Jan 16 18:03:33.764362 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 16 18:03:33.768535 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 16 18:03:33.838779 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 16 18:03:33.859611 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 16 18:03:33.866914 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 16 18:03:33.867632 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 16 18:03:33.867688 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 16 18:03:33.890588 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 16 18:03:33.897370 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 16 18:03:33.907214 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1160) Jan 16 18:03:33.911146 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5e96ab2e-f088-4ca2-ba97-55451a1893dc Jan 16 18:03:33.911192 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 16 18:03:33.919499 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 16 18:03:33.919565 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 16 18:03:33.922892 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 16 18:03:34.315266 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 16 18:03:34.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:34.320469 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 16 18:03:34.325461 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 16 18:03:34.360269 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 16 18:03:34.366012 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5e96ab2e-f088-4ca2-ba97-55451a1893dc Jan 16 18:03:34.402530 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 16 18:03:34.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:34.417039 ignition[1256]: INFO : Ignition 2.24.0 Jan 16 18:03:34.419245 ignition[1256]: INFO : Stage: mount Jan 16 18:03:34.421294 ignition[1256]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 18:03:34.421294 ignition[1256]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 16 18:03:34.426330 ignition[1256]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 16 18:03:34.429663 ignition[1256]: INFO : PUT result: OK Jan 16 18:03:34.437773 ignition[1256]: INFO : mount: mount passed Jan 16 18:03:34.437773 ignition[1256]: INFO : Ignition finished successfully Jan 16 18:03:34.441136 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 16 18:03:34.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:34.448666 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 16 18:03:34.478418 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 16 18:03:34.517170 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1268) Jan 16 18:03:34.517234 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5e96ab2e-f088-4ca2-ba97-55451a1893dc Jan 16 18:03:34.520952 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 16 18:03:34.529434 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 16 18:03:34.529511 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 16 18:03:34.533456 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 16 18:03:34.576864 ignition[1286]: INFO : Ignition 2.24.0 Jan 16 18:03:34.579011 ignition[1286]: INFO : Stage: files Jan 16 18:03:34.579011 ignition[1286]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 18:03:34.579011 ignition[1286]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 16 18:03:34.579011 ignition[1286]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 16 18:03:34.590471 ignition[1286]: INFO : PUT result: OK Jan 16 18:03:34.597764 ignition[1286]: DEBUG : files: compiled without relabeling support, skipping Jan 16 18:03:34.602551 ignition[1286]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 16 18:03:34.602551 ignition[1286]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 16 18:03:34.613669 ignition[1286]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 16 18:03:34.618140 ignition[1286]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 16 18:03:34.622477 unknown[1286]: wrote ssh authorized keys file for user: core Jan 16 18:03:34.624919 ignition[1286]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 16 18:03:34.633595 ignition[1286]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 16 18:03:34.633595 ignition[1286]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 16 18:03:34.733774 ignition[1286]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 16 18:03:34.909761 ignition[1286]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 16 18:03:34.909761 ignition[1286]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 16 18:03:34.919578 ignition[1286]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 16 18:03:34.919578 ignition[1286]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 16 18:03:34.919578 ignition[1286]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 16 18:03:34.919578 ignition[1286]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 16 18:03:34.919578 ignition[1286]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 16 18:03:34.919578 ignition[1286]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 16 18:03:34.919578 ignition[1286]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 16 18:03:34.947573 ignition[1286]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 16 18:03:34.947573 ignition[1286]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 16 18:03:34.947573 ignition[1286]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 16 18:03:34.947573 ignition[1286]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 16 18:03:34.947573 ignition[1286]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 16 18:03:34.947573 ignition[1286]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 16 18:03:35.401366 ignition[1286]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 16 18:03:35.815555 ignition[1286]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 16 18:03:35.815555 ignition[1286]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 16 18:03:35.823532 ignition[1286]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 16 18:03:35.831518 ignition[1286]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 16 18:03:35.831518 ignition[1286]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 16 18:03:35.831518 ignition[1286]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 16 18:03:35.842537 ignition[1286]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 16 18:03:35.842537 ignition[1286]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 16 18:03:35.842537 ignition[1286]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 16 18:03:35.842537 ignition[1286]: INFO : files: files passed Jan 16 18:03:35.842537 ignition[1286]: INFO : Ignition finished successfully Jan 16 18:03:35.848207 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 16 18:03:35.868545 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 16 18:03:35.868588 kernel: audit: type=1130 audit(1768586615.859:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:35.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:35.862283 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 16 18:03:35.876648 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 16 18:03:35.907678 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 16 18:03:35.908192 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 16 18:03:35.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:03:35.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:35.924197 kernel: audit: type=1130 audit(1768586615.913:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:35.924253 kernel: audit: type=1131 audit(1768586615.913:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:35.938263 initrd-setup-root-after-ignition[1316]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 18:03:35.942328 initrd-setup-root-after-ignition[1320]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 18:03:35.945821 initrd-setup-root-after-ignition[1316]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 16 18:03:35.945429 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 18:03:35.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:35.960882 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 16 18:03:35.967535 kernel: audit: type=1130 audit(1768586615.954:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:35.968829 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 16 18:03:36.072888 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 16 18:03:36.087004 kernel: audit: type=1130 audit(1768586616.074:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.087045 kernel: audit: type=1131 audit(1768586616.074:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.073094 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 16 18:03:36.076510 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 16 18:03:36.092540 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 16 18:03:36.095945 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 16 18:03:36.101473 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
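The files stage above ends by writing /sysroot/etc/.ignition-result.json, which is visible as /etc/.ignition-result.json once the system has switched root. A minimal sketch for inspecting it after boot; the log does not show the file's schema, so this only pretty-prints whatever JSON it contains:

```python
# Sketch: dump the provisioning record written by Ignition's "files" stage.
# The path comes from the log; no assumptions are made about its fields.
import json
from pathlib import Path

RESULT_PATH = Path("/etc/.ignition-result.json")

def show_ignition_result() -> None:
    if not RESULT_PATH.exists():
        print(f"{RESULT_PATH} not found (not an Ignition-provisioned system?)")
        return
    data = json.loads(RESULT_PATH.read_text())
    print(json.dumps(data, indent=2, sort_keys=True))

if __name__ == "__main__":
    show_ignition_result()
```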
Jan 16 18:03:36.146211 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 18:03:36.153231 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 16 18:03:36.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.165363 kernel: audit: type=1130 audit(1768586616.147:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.194220 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 16 18:03:36.194734 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 16 18:03:36.202685 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 18:03:36.205586 systemd[1]: Stopped target timers.target - Timer Units. Jan 16 18:03:36.208129 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 16 18:03:36.208407 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 18:03:36.231305 kernel: audit: type=1131 audit(1768586616.215:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.222475 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 16 18:03:36.223413 systemd[1]: Stopped target basic.target - Basic System. Jan 16 18:03:36.223878 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 16 18:03:36.224569 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 16 18:03:36.236582 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 16 18:03:36.239895 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 16 18:03:36.247602 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 16 18:03:36.249664 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 16 18:03:36.254425 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 16 18:03:36.259545 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 16 18:03:36.281988 kernel: audit: type=1131 audit(1768586616.272:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.264019 systemd[1]: Stopped target swap.target - Swaps. Jan 16 18:03:36.268839 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 16 18:03:36.269105 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 16 18:03:36.282274 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 16 18:03:36.288464 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 18:03:36.291492 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 16 18:03:36.312437 kernel: audit: type=1131 audit(1768586616.302:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.297972 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 18:03:36.298240 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 16 18:03:36.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.298470 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 16 18:03:36.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.312682 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 16 18:03:36.313032 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 18:03:36.316440 systemd[1]: ignition-files.service: Deactivated successfully. Jan 16 18:03:36.316654 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 16 18:03:36.332780 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 16 18:03:36.345983 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 16 18:03:36.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.352240 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 16 18:03:36.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.352549 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 18:03:36.358505 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 16 18:03:36.358826 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 18:03:36.361907 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 16 18:03:36.362222 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 16 18:03:36.401821 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 16 18:03:36.404236 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 16 18:03:36.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.422398 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 16 18:03:36.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.429747 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 16 18:03:36.437721 ignition[1340]: INFO : Ignition 2.24.0 Jan 16 18:03:36.437721 ignition[1340]: INFO : Stage: umount Jan 16 18:03:36.437721 ignition[1340]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 18:03:36.437721 ignition[1340]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 16 18:03:36.437721 ignition[1340]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 16 18:03:36.430036 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 16 18:03:36.463295 ignition[1340]: INFO : PUT result: OK Jan 16 18:03:36.464999 ignition[1340]: INFO : umount: umount passed Jan 16 18:03:36.464999 ignition[1340]: INFO : Ignition finished successfully Jan 16 18:03:36.467961 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 16 18:03:36.468241 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 16 18:03:36.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.477613 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 16 18:03:36.477861 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 16 18:03:36.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.485036 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 16 18:03:36.485652 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 16 18:03:36.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.491776 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 16 18:03:36.491995 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 16 18:03:36.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.498458 systemd[1]: Stopped target network.target - Network. Jan 16 18:03:36.502722 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 16 18:03:36.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.502825 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 16 18:03:36.505717 systemd[1]: Stopped target paths.target - Path Units. Jan 16 18:03:36.512592 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 16 18:03:36.514499 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 18:03:36.524294 systemd[1]: Stopped target slices.target - Slice Units. Jan 16 18:03:36.526498 systemd[1]: Stopped target sockets.target - Socket Units. Jan 16 18:03:36.532549 systemd[1]: iscsid.socket: Deactivated successfully. Jan 16 18:03:36.532743 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 16 18:03:36.539375 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 16 18:03:36.539452 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 16 18:03:36.544214 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 16 18:03:36.544275 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 16 18:03:36.548942 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 16 18:03:36.554165 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 16 18:03:36.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.560318 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 16 18:03:36.560529 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 16 18:03:36.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.567945 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 16 18:03:36.568940 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 16 18:03:36.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.575342 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 16 18:03:36.580142 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 16 18:03:36.590258 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 16 18:03:36.593236 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 16 18:03:36.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.600743 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 16 18:03:36.601162 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 16 18:03:36.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.611000 audit: BPF prog-id=6 op=UNLOAD Jan 16 18:03:36.612604 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 16 18:03:36.611000 audit: BPF prog-id=9 op=UNLOAD Jan 16 18:03:36.617639 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 16 18:03:36.617743 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Jan 16 18:03:36.626331 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 16 18:03:36.629553 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 16 18:03:36.629675 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 16 18:03:36.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.639963 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 16 18:03:36.640271 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 16 18:03:36.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.653428 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 16 18:03:36.654017 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 16 18:03:36.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.660657 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 18:03:36.690582 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 16 18:03:36.691110 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 18:03:36.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.703053 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 16 18:03:36.703265 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 16 18:03:36.710828 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 16 18:03:36.710916 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 18:03:36.713616 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 16 18:03:36.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.713715 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 16 18:03:36.725941 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 16 18:03:36.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.726050 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 16 18:03:36.729283 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 16 18:03:36.729391 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 18:03:36.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.745261 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Jan 16 18:03:36.748612 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 16 18:03:36.748738 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 16 18:03:36.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.756949 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 16 18:03:36.757066 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 18:03:36.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.767653 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 16 18:03:36.767775 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 16 18:03:36.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.778018 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 16 18:03:36.781038 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 18:03:36.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.786172 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 18:03:36.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.788806 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 18:03:36.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.795067 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 16 18:03:36.795288 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 16 18:03:36.809016 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 16 18:03:36.811555 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 16 18:03:36.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.817411 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 16 18:03:36.823932 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 16 18:03:36.854772 systemd[1]: Switching root. 
Jan 16 18:03:36.900166 systemd-journald[360]: Journal stopped Jan 16 18:03:38.940700 systemd-journald[360]: Received SIGTERM from PID 1 (systemd). Jan 16 18:03:38.940825 kernel: SELinux: policy capability network_peer_controls=1 Jan 16 18:03:38.940878 kernel: SELinux: policy capability open_perms=1 Jan 16 18:03:38.940918 kernel: SELinux: policy capability extended_socket_class=1 Jan 16 18:03:38.940950 kernel: SELinux: policy capability always_check_network=0 Jan 16 18:03:38.940984 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 16 18:03:38.941017 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 16 18:03:38.941047 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 16 18:03:38.941082 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 16 18:03:38.941130 kernel: SELinux: policy capability userspace_initial_context=0 Jan 16 18:03:38.941171 systemd[1]: Successfully loaded SELinux policy in 79.726ms. Jan 16 18:03:38.941222 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.933ms. Jan 16 18:03:38.941260 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 16 18:03:38.941296 systemd[1]: Detected virtualization amazon. Jan 16 18:03:38.941331 systemd[1]: Detected architecture arm64. Jan 16 18:03:38.941363 systemd[1]: Detected first boot. Jan 16 18:03:38.941395 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 16 18:03:38.941426 zram_generator::config[1383]: No configuration found. Jan 16 18:03:38.941474 kernel: NET: Registered PF_VSOCK protocol family Jan 16 18:03:38.941507 systemd[1]: Populated /etc with preset unit settings. Jan 16 18:03:38.941542 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 16 18:03:38.941575 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 16 18:03:38.941610 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 16 18:03:38.941643 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 16 18:03:38.941673 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 16 18:03:38.941706 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 16 18:03:38.941741 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 16 18:03:38.941771 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 16 18:03:38.941804 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 16 18:03:38.941856 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 16 18:03:38.941891 systemd[1]: Created slice user.slice - User and Session Slice. Jan 16 18:03:38.941926 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 18:03:38.941962 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 18:03:38.941995 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 16 18:03:38.942024 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Jan 16 18:03:38.942056 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 16 18:03:38.942086 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 16 18:03:38.944197 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 16 18:03:38.944258 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 18:03:38.944293 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 16 18:03:38.944334 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 16 18:03:38.944368 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 16 18:03:38.944409 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 16 18:03:38.944440 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 16 18:03:38.944475 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 18:03:38.944510 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 16 18:03:38.944545 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 16 18:03:38.944581 systemd[1]: Reached target slices.target - Slice Units. Jan 16 18:03:38.944613 systemd[1]: Reached target swap.target - Swaps. Jan 16 18:03:38.944642 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 16 18:03:38.944672 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 16 18:03:38.944705 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 16 18:03:38.944745 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 16 18:03:38.944779 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 16 18:03:38.944812 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 16 18:03:38.944843 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 16 18:03:38.944876 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 16 18:03:38.944906 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 16 18:03:38.944936 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 18:03:38.944969 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 16 18:03:38.945003 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 16 18:03:38.945036 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 16 18:03:38.945069 systemd[1]: Mounting media.mount - External Media Directory... Jan 16 18:03:38.945098 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 16 18:03:38.945183 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 16 18:03:38.945220 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 16 18:03:38.945252 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 16 18:03:38.945288 systemd[1]: Reached target machines.target - Containers. Jan 16 18:03:38.945318 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jan 16 18:03:38.945352 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 18:03:38.945384 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 16 18:03:38.945416 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 16 18:03:38.945448 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 18:03:38.945479 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 18:03:38.945514 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 18:03:38.945544 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 16 18:03:38.945576 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 18:03:38.945607 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 16 18:03:38.945639 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 16 18:03:38.945669 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 16 18:03:38.945702 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 16 18:03:38.945732 systemd[1]: Stopped systemd-fsck-usr.service. Jan 16 18:03:38.945765 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 16 18:03:38.945798 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 16 18:03:38.945856 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 16 18:03:38.945889 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 16 18:03:38.945919 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 16 18:03:38.945951 kernel: ACPI: bus type drm_connector registered Jan 16 18:03:38.945980 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 16 18:03:38.946013 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 16 18:03:38.946045 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 16 18:03:38.946081 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 16 18:03:38.948152 systemd[1]: Mounted media.mount - External Media Directory. Jan 16 18:03:38.948224 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 16 18:03:38.948259 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 16 18:03:38.948291 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 16 18:03:38.948321 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 18:03:38.948353 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 16 18:03:38.948392 kernel: fuse: init (API version 7.41) Jan 16 18:03:38.948426 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 16 18:03:38.948460 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 18:03:38.948491 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 18:03:38.948526 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 16 18:03:38.948559 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 18:03:38.948589 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 18:03:38.948619 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 18:03:38.948649 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 16 18:03:38.948678 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 16 18:03:38.948708 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 18:03:38.948742 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 18:03:38.948776 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 16 18:03:38.948807 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 16 18:03:38.948840 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 16 18:03:38.948874 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 16 18:03:38.948905 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 16 18:03:38.948935 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 16 18:03:38.948969 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 16 18:03:38.949001 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 16 18:03:38.949080 systemd-journald[1462]: Collecting audit messages is enabled. Jan 16 18:03:38.949191 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 18:03:38.949227 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 16 18:03:38.949258 systemd-journald[1462]: Journal started Jan 16 18:03:38.949304 systemd-journald[1462]: Runtime Journal (/run/log/journal/ec2982262f99144e60a3fd9e2b69a3a8) is 8M, max 75.3M, 67.3M free. Jan 16 18:03:38.420000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 16 18:03:38.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:38.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:38.661000 audit: BPF prog-id=14 op=UNLOAD Jan 16 18:03:38.661000 audit: BPF prog-id=13 op=UNLOAD Jan 16 18:03:38.662000 audit: BPF prog-id=15 op=LOAD Jan 16 18:03:38.665000 audit: BPF prog-id=16 op=LOAD Jan 16 18:03:38.665000 audit: BPF prog-id=17 op=LOAD Jan 16 18:03:38.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:03:38.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:38.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:38.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:38.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:38.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:38.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:38.957959 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 16 18:03:38.958012 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 18:03:38.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:38.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:38.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:38.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:38.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:38.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:38.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:03:38.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:38.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:38.933000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 16 18:03:38.933000 audit[1462]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffe692d990 a2=4000 a3=0 items=0 ppid=1 pid=1462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:38.933000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 16 18:03:38.247375 systemd[1]: Queued start job for default target multi-user.target. Jan 16 18:03:38.263058 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 16 18:03:38.263974 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 16 18:03:38.974140 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 16 18:03:38.974226 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 18:03:38.985524 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 16 18:03:38.993895 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 16 18:03:39.018155 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 16 18:03:39.018278 systemd[1]: Started systemd-journald.service - Journal Service. Jan 16 18:03:39.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:39.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:39.022403 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 16 18:03:39.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:39.063966 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 16 18:03:39.074328 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 16 18:03:39.085528 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 16 18:03:39.096287 kernel: loop1: detected capacity change from 0 to 61504 Jan 16 18:03:39.093664 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 16 18:03:39.146184 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 16 18:03:39.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:39.155295 systemd-journald[1462]: Time spent on flushing to /var/log/journal/ec2982262f99144e60a3fd9e2b69a3a8 is 76.637ms for 1050 entries. Jan 16 18:03:39.155295 systemd-journald[1462]: System Journal (/var/log/journal/ec2982262f99144e60a3fd9e2b69a3a8) is 8M, max 588.1M, 580.1M free. Jan 16 18:03:39.265510 systemd-journald[1462]: Received client request to flush runtime journal. Jan 16 18:03:39.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:39.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:39.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:39.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:39.195189 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 16 18:03:39.200555 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 16 18:03:39.208587 systemd-tmpfiles[1492]: ACLs are not supported, ignoring. Jan 16 18:03:39.208612 systemd-tmpfiles[1492]: ACLs are not supported, ignoring. Jan 16 18:03:39.223264 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 18:03:39.233876 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 16 18:03:39.242523 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 16 18:03:39.267708 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 16 18:03:39.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:39.273220 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 16 18:03:39.283155 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 16 18:03:39.293323 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 16 18:03:39.319466 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 16 18:03:39.323506 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 16 18:03:39.357340 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 16 18:03:39.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:03:39.362000 audit: BPF prog-id=18 op=LOAD Jan 16 18:03:39.363000 audit: BPF prog-id=19 op=LOAD Jan 16 18:03:39.363000 audit: BPF prog-id=20 op=LOAD Jan 16 18:03:39.371000 audit: BPF prog-id=21 op=LOAD Jan 16 18:03:39.368461 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 16 18:03:39.376587 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 16 18:03:39.386453 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 16 18:03:39.400000 audit: BPF prog-id=22 op=LOAD Jan 16 18:03:39.402000 audit: BPF prog-id=23 op=LOAD Jan 16 18:03:39.402000 audit: BPF prog-id=24 op=LOAD Jan 16 18:03:39.406263 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 16 18:03:39.416000 audit: BPF prog-id=25 op=LOAD Jan 16 18:03:39.416000 audit: BPF prog-id=26 op=LOAD Jan 16 18:03:39.416000 audit: BPF prog-id=27 op=LOAD Jan 16 18:03:39.420576 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 16 18:03:39.482022 kernel: loop2: detected capacity change from 0 to 100192 Jan 16 18:03:39.486868 systemd-tmpfiles[1539]: ACLs are not supported, ignoring. Jan 16 18:03:39.486910 systemd-tmpfiles[1539]: ACLs are not supported, ignoring. Jan 16 18:03:39.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:39.503314 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 18:03:39.548673 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 16 18:03:39.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:39.571156 kernel: loop3: detected capacity change from 0 to 207008 Jan 16 18:03:39.590642 systemd-nsresourced[1541]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 16 18:03:39.595523 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 16 18:03:39.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:39.741563 systemd-resolved[1538]: Positive Trust Anchors: Jan 16 18:03:39.741597 systemd-resolved[1538]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 16 18:03:39.741607 systemd-resolved[1538]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 16 18:03:39.741670 systemd-resolved[1538]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 16 18:03:39.771449 systemd-resolved[1538]: Defaulting to hostname 'linux'. 
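
The trust-anchor lines above show the two root-zone DS records resolved validates against, plus a list of negative trust anchors (private-address reverse zones and local-only names) for which DNSSEC validation is skipped. A small sketch of the suffix check those negative anchors imply; the set below is a subset copied from the log:

    # Suffix check against a subset of the negative trust anchors listed above;
    # names under these zones are exempt from DNSSEC validation by resolved.
    NEGATIVE_ANCHORS = {"home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa",
                        "corp", "home", "internal", "intranet", "lan", "local",
                        "private", "test"}

    def under_negative_anchor(name: str) -> bool:
        labels = name.rstrip(".").lower().split(".")
        return any(".".join(labels[i:]) in NEGATIVE_ANCHORS
                   for i in range(len(labels)))

    print(under_negative_anchor("nas.lan"))       # True
    print(under_negative_anchor("example.org"))   # False
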
Jan 16 18:03:39.775879 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 16 18:03:39.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:39.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:39.780581 systemd-oomd[1537]: No swap; memory pressure usage will be degraded Jan 16 18:03:39.781942 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 16 18:03:39.785106 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 16 18:03:39.882194 kernel: loop4: detected capacity change from 0 to 45344 Jan 16 18:03:39.923186 kernel: loop5: detected capacity change from 0 to 61504 Jan 16 18:03:39.937165 kernel: loop6: detected capacity change from 0 to 100192 Jan 16 18:03:39.956223 kernel: loop7: detected capacity change from 0 to 207008 Jan 16 18:03:39.991146 kernel: loop1: detected capacity change from 0 to 45344 Jan 16 18:03:40.010364 (sd-merge)[1564]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Jan 16 18:03:40.017472 (sd-merge)[1564]: Merged extensions into '/usr'. Jan 16 18:03:40.025102 systemd[1]: Reload requested from client PID 1490 ('systemd-sysext') (unit systemd-sysext.service)... Jan 16 18:03:40.025584 systemd[1]: Reloading... Jan 16 18:03:40.228243 zram_generator::config[1594]: No configuration found. Jan 16 18:03:40.638433 systemd[1]: Reloading finished in 611 ms. Jan 16 18:03:40.660813 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 16 18:03:40.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:40.664364 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 16 18:03:40.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:40.683342 systemd[1]: Starting ensure-sysext.service... Jan 16 18:03:40.692713 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 16 18:03:40.697000 audit: BPF prog-id=8 op=UNLOAD Jan 16 18:03:40.699000 audit: BPF prog-id=7 op=UNLOAD Jan 16 18:03:40.700000 audit: BPF prog-id=28 op=LOAD Jan 16 18:03:40.704000 audit: BPF prog-id=29 op=LOAD Jan 16 18:03:40.707049 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
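
The loop-device capacity changes in this stretch appear as systemd-sysext attaches the four extension images named by sd-merge. A rough size conversion, assuming the kernel reports loop capacity in 512-byte sectors:

    # Convert the "detected capacity change" values above to MiB,
    # assuming the counts are 512-byte sectors.
    for dev, sectors in {"loop4": 45344, "loop5": 61504,
                         "loop6": 100192, "loop7": 207008}.items():
        print(f"{dev}: {sectors * 512 / 2**20:.1f} MiB")
    # loop4: 22.1 MiB, loop5: 30.0 MiB, loop6: 48.9 MiB, loop7: 101.1 MiB
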
Jan 16 18:03:40.714000 audit: BPF prog-id=30 op=LOAD Jan 16 18:03:40.714000 audit: BPF prog-id=22 op=UNLOAD Jan 16 18:03:40.714000 audit: BPF prog-id=31 op=LOAD Jan 16 18:03:40.714000 audit: BPF prog-id=32 op=LOAD Jan 16 18:03:40.714000 audit: BPF prog-id=23 op=UNLOAD Jan 16 18:03:40.714000 audit: BPF prog-id=24 op=UNLOAD Jan 16 18:03:40.716000 audit: BPF prog-id=33 op=LOAD Jan 16 18:03:40.716000 audit: BPF prog-id=15 op=UNLOAD Jan 16 18:03:40.716000 audit: BPF prog-id=34 op=LOAD Jan 16 18:03:40.716000 audit: BPF prog-id=35 op=LOAD Jan 16 18:03:40.716000 audit: BPF prog-id=16 op=UNLOAD Jan 16 18:03:40.716000 audit: BPF prog-id=17 op=UNLOAD Jan 16 18:03:40.719000 audit: BPF prog-id=36 op=LOAD Jan 16 18:03:40.719000 audit: BPF prog-id=21 op=UNLOAD Jan 16 18:03:40.721000 audit: BPF prog-id=37 op=LOAD Jan 16 18:03:40.722000 audit: BPF prog-id=25 op=UNLOAD Jan 16 18:03:40.722000 audit: BPF prog-id=38 op=LOAD Jan 16 18:03:40.722000 audit: BPF prog-id=39 op=LOAD Jan 16 18:03:40.722000 audit: BPF prog-id=26 op=UNLOAD Jan 16 18:03:40.722000 audit: BPF prog-id=27 op=UNLOAD Jan 16 18:03:40.723000 audit: BPF prog-id=40 op=LOAD Jan 16 18:03:40.723000 audit: BPF prog-id=18 op=UNLOAD Jan 16 18:03:40.724000 audit: BPF prog-id=41 op=LOAD Jan 16 18:03:40.724000 audit: BPF prog-id=42 op=LOAD Jan 16 18:03:40.724000 audit: BPF prog-id=19 op=UNLOAD Jan 16 18:03:40.724000 audit: BPF prog-id=20 op=UNLOAD Jan 16 18:03:40.737843 systemd[1]: Reload requested from client PID 1646 ('systemctl') (unit ensure-sysext.service)... Jan 16 18:03:40.737875 systemd[1]: Reloading... Jan 16 18:03:40.757796 systemd-tmpfiles[1647]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 16 18:03:40.759450 systemd-tmpfiles[1647]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 16 18:03:40.760987 systemd-tmpfiles[1647]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 16 18:03:40.771475 systemd-tmpfiles[1647]: ACLs are not supported, ignoring. Jan 16 18:03:40.772285 systemd-tmpfiles[1647]: ACLs are not supported, ignoring. Jan 16 18:03:40.792468 systemd-udevd[1648]: Using default interface naming scheme 'v257'. Jan 16 18:03:40.803504 systemd-tmpfiles[1647]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 18:03:40.805841 systemd-tmpfiles[1647]: Skipping /boot Jan 16 18:03:40.850797 systemd-tmpfiles[1647]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 18:03:40.850819 systemd-tmpfiles[1647]: Skipping /boot Jan 16 18:03:40.980170 zram_generator::config[1695]: No configuration found. Jan 16 18:03:41.039288 (udev-worker)[1676]: Network interface NamePolicy= disabled on kernel command line. Jan 16 18:03:41.587378 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 16 18:03:41.588790 systemd[1]: Reloading finished in 850 ms. Jan 16 18:03:41.621154 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 18:03:41.631712 kernel: kauditd_printk_skb: 135 callbacks suppressed Jan 16 18:03:41.631839 kernel: audit: type=1130 audit(1768586621.624:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:03:41.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:41.635643 kernel: audit: type=1334 audit(1768586621.630:182): prog-id=43 op=LOAD Jan 16 18:03:41.630000 audit: BPF prog-id=43 op=LOAD Jan 16 18:03:41.638437 kernel: audit: type=1334 audit(1768586621.630:183): prog-id=44 op=LOAD Jan 16 18:03:41.630000 audit: BPF prog-id=44 op=LOAD Jan 16 18:03:41.641146 kernel: audit: type=1334 audit(1768586621.630:184): prog-id=28 op=UNLOAD Jan 16 18:03:41.630000 audit: BPF prog-id=28 op=UNLOAD Jan 16 18:03:41.642906 kernel: audit: type=1334 audit(1768586621.630:185): prog-id=29 op=UNLOAD Jan 16 18:03:41.630000 audit: BPF prog-id=29 op=UNLOAD Jan 16 18:03:41.645636 kernel: audit: type=1334 audit(1768586621.637:186): prog-id=45 op=LOAD Jan 16 18:03:41.637000 audit: BPF prog-id=45 op=LOAD Jan 16 18:03:41.647635 kernel: audit: type=1334 audit(1768586621.637:187): prog-id=40 op=UNLOAD Jan 16 18:03:41.637000 audit: BPF prog-id=40 op=UNLOAD Jan 16 18:03:41.650643 kernel: audit: type=1334 audit(1768586621.637:188): prog-id=46 op=LOAD Jan 16 18:03:41.637000 audit: BPF prog-id=46 op=LOAD Jan 16 18:03:41.652812 kernel: audit: type=1334 audit(1768586621.637:189): prog-id=47 op=LOAD Jan 16 18:03:41.637000 audit: BPF prog-id=47 op=LOAD Jan 16 18:03:41.655738 kernel: audit: type=1334 audit(1768586621.637:190): prog-id=41 op=UNLOAD Jan 16 18:03:41.637000 audit: BPF prog-id=41 op=UNLOAD Jan 16 18:03:41.637000 audit: BPF prog-id=42 op=UNLOAD Jan 16 18:03:41.639000 audit: BPF prog-id=48 op=LOAD Jan 16 18:03:41.639000 audit: BPF prog-id=36 op=UNLOAD Jan 16 18:03:41.644000 audit: BPF prog-id=49 op=LOAD Jan 16 18:03:41.644000 audit: BPF prog-id=37 op=UNLOAD Jan 16 18:03:41.644000 audit: BPF prog-id=50 op=LOAD Jan 16 18:03:41.644000 audit: BPF prog-id=51 op=LOAD Jan 16 18:03:41.644000 audit: BPF prog-id=38 op=UNLOAD Jan 16 18:03:41.644000 audit: BPF prog-id=39 op=UNLOAD Jan 16 18:03:41.648000 audit: BPF prog-id=52 op=LOAD Jan 16 18:03:41.648000 audit: BPF prog-id=33 op=UNLOAD Jan 16 18:03:41.652000 audit: BPF prog-id=53 op=LOAD Jan 16 18:03:41.652000 audit: BPF prog-id=54 op=LOAD Jan 16 18:03:41.652000 audit: BPF prog-id=34 op=UNLOAD Jan 16 18:03:41.652000 audit: BPF prog-id=35 op=UNLOAD Jan 16 18:03:41.654000 audit: BPF prog-id=55 op=LOAD Jan 16 18:03:41.654000 audit: BPF prog-id=30 op=UNLOAD Jan 16 18:03:41.654000 audit: BPF prog-id=56 op=LOAD Jan 16 18:03:41.654000 audit: BPF prog-id=57 op=LOAD Jan 16 18:03:41.654000 audit: BPF prog-id=31 op=UNLOAD Jan 16 18:03:41.654000 audit: BPF prog-id=32 op=UNLOAD Jan 16 18:03:41.729937 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 18:03:41.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:41.817963 systemd[1]: Finished ensure-sysext.service. Jan 16 18:03:41.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:41.870357 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. 
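
The burst of kernel audit records here appears to be systemd re-installing its BPF programs across the reloads logged above: each prog-id gets an op=LOAD for the new instance and an op=UNLOAD when the old one is released. A small sketch that tallies which program IDs remain loaded over such a stretch of the journal (the input path is hypothetical):

    import re

    # Track BPF program IDs from audit lines such as
    #   "audit: BPF prog-id=43 op=LOAD" and "audit: BPF prog-id=28 op=UNLOAD".
    PAT = re.compile(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)")
    loaded = set()
    with open("journal-excerpt.log") as fh:   # hypothetical file holding these lines
        for line in fh:
            m = PAT.search(line)
            if not m:
                continue
            prog_id, op = int(m.group(1)), m.group(2)
            (loaded.add if op == "LOAD" else loaded.discard)(prog_id)
    print(sorted(loaded))
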
Jan 16 18:03:41.902651 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 16 18:03:41.912442 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 16 18:03:41.914640 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 18:03:41.920072 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 18:03:41.929836 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 18:03:41.940522 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 18:03:41.946393 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 18:03:41.949055 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 18:03:41.949280 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 16 18:03:41.956454 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 16 18:03:41.963241 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 16 18:03:41.967316 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 16 18:03:41.971721 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 16 18:03:41.975000 audit: BPF prog-id=58 op=LOAD Jan 16 18:03:41.981918 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 16 18:03:41.984533 systemd[1]: Reached target time-set.target - System Time Set. Jan 16 18:03:41.992554 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 16 18:03:42.003541 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 18:03:42.055309 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 18:03:42.068274 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 18:03:42.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:42.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:42.088873 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 18:03:42.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:42.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:03:42.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:42.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:42.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:42.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:42.096512 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 18:03:42.098623 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 18:03:42.099029 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 18:03:42.101561 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 18:03:42.108819 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 18:03:42.109329 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 18:03:42.112254 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 18:03:42.121000 audit[1871]: SYSTEM_BOOT pid=1871 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 16 18:03:42.143836 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 16 18:03:42.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:42.149485 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 16 18:03:42.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:42.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:42.235210 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 16 18:03:42.271218 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 16 18:03:42.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:03:42.274415 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 16 18:03:42.276000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 16 18:03:42.276000 audit[1904]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe7b1ce10 a2=420 a3=0 items=0 ppid=1859 pid=1904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:42.276000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 16 18:03:42.280906 augenrules[1904]: No rules Jan 16 18:03:42.283813 systemd[1]: audit-rules.service: Deactivated successfully. Jan 16 18:03:42.286281 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 16 18:03:42.308014 systemd-networkd[1870]: lo: Link UP Jan 16 18:03:42.308624 systemd-networkd[1870]: lo: Gained carrier Jan 16 18:03:42.311468 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 18:03:42.320474 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 16 18:03:42.323439 systemd[1]: Reached target network.target - Network. Jan 16 18:03:42.328409 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 16 18:03:42.331013 systemd-networkd[1870]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 16 18:03:42.331229 systemd-networkd[1870]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 18:03:42.337437 systemd-networkd[1870]: eth0: Link UP Jan 16 18:03:42.337875 systemd-networkd[1870]: eth0: Gained carrier Jan 16 18:03:42.337914 systemd-networkd[1870]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 16 18:03:42.338503 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 16 18:03:42.353666 systemd-networkd[1870]: eth0: DHCPv4 address 172.31.22.249/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 16 18:03:42.380058 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 16 18:03:42.811030 ldconfig[1866]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 16 18:03:42.823346 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 16 18:03:42.828712 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 16 18:03:42.856662 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 16 18:03:42.859858 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 18:03:42.862489 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 16 18:03:42.865601 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 16 18:03:42.868768 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
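
The PROCTITLE value in the audit record above is the process's command line, hex-encoded with NUL-separated arguments; decoding it gives /sbin/auditctl -R /etc/audit/audit.rules, i.e. the audit-rules service loading its rule file:

    # Decode the hex-encoded, NUL-separated PROCTITLE field from the record above.
    hexstr = ("2F7362696E2F617564697463746C002D52002F657463"
              "2F61756469742F61756469742E72756C6573")
    argv = [part.decode() for part in bytes.fromhex(hexstr).split(b"\x00")]
    print(argv)   # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
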
Jan 16 18:03:42.871519 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 16 18:03:42.874547 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 16 18:03:42.877458 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 16 18:03:42.880107 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 16 18:03:42.882995 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 16 18:03:42.883052 systemd[1]: Reached target paths.target - Path Units. Jan 16 18:03:42.885156 systemd[1]: Reached target timers.target - Timer Units. Jan 16 18:03:42.888291 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 16 18:03:42.893663 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 16 18:03:42.899787 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 16 18:03:42.903091 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 16 18:03:42.906105 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 16 18:03:42.921339 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 16 18:03:42.924599 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 16 18:03:42.928417 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 16 18:03:42.930997 systemd[1]: Reached target sockets.target - Socket Units. Jan 16 18:03:42.933144 systemd[1]: Reached target basic.target - Basic System. Jan 16 18:03:42.935516 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 16 18:03:42.935566 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 16 18:03:42.939307 systemd[1]: Starting containerd.service - containerd container runtime... Jan 16 18:03:42.944624 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 16 18:03:42.951577 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 16 18:03:42.959621 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 16 18:03:42.966506 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 16 18:03:42.972605 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 16 18:03:42.975073 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 16 18:03:42.982277 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 16 18:03:42.990062 systemd[1]: Started ntpd.service - Network Time Service. Jan 16 18:03:43.006540 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 16 18:03:43.021638 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 16 18:03:43.036538 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 16 18:03:43.047457 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 16 18:03:43.063543 systemd[1]: Starting systemd-logind.service - User Login Management... 
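
For the DHCPv4 lease logged just above (172.31.22.249/20 with gateway 172.31.16.1 from systemd-networkd), a quick check with the standard ipaddress module confirms the gateway sits inside the acquired /20:

    import ipaddress

    # The lease above: address 172.31.22.249/20, gateway 172.31.16.1.
    iface = ipaddress.ip_interface("172.31.22.249/20")
    gateway = ipaddress.ip_address("172.31.16.1")
    print(iface.network)              # 172.31.16.0/20
    print(gateway in iface.network)   # True
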
Jan 16 18:03:43.065247 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 16 18:03:43.067217 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 16 18:03:43.068494 systemd[1]: Starting update-engine.service - Update Engine... Jan 16 18:03:43.072383 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 16 18:03:43.079054 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 16 18:03:43.101916 jq[1925]: false Jan 16 18:03:43.113244 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 16 18:03:43.113781 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 16 18:03:43.118751 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 16 18:03:43.176768 jq[1937]: true Jan 16 18:03:43.177240 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 16 18:03:43.186037 extend-filesystems[1926]: Found /dev/nvme0n1p6 Jan 16 18:03:43.191471 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 16 18:03:43.224679 ntpd[1928]: ntpd 4.2.8p18@1.4062-o Fri Jan 16 02:35:22 UTC 2026 (1): Starting Jan 16 18:03:43.227751 ntpd[1928]: 16 Jan 18:03:43 ntpd[1928]: ntpd 4.2.8p18@1.4062-o Fri Jan 16 02:35:22 UTC 2026 (1): Starting Jan 16 18:03:43.227751 ntpd[1928]: 16 Jan 18:03:43 ntpd[1928]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 16 18:03:43.227751 ntpd[1928]: 16 Jan 18:03:43 ntpd[1928]: ---------------------------------------------------- Jan 16 18:03:43.227751 ntpd[1928]: 16 Jan 18:03:43 ntpd[1928]: ntp-4 is maintained by Network Time Foundation, Jan 16 18:03:43.227751 ntpd[1928]: 16 Jan 18:03:43 ntpd[1928]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 16 18:03:43.227751 ntpd[1928]: 16 Jan 18:03:43 ntpd[1928]: corporation. Support and training for ntp-4 are Jan 16 18:03:43.227751 ntpd[1928]: 16 Jan 18:03:43 ntpd[1928]: available at https://www.nwtime.org/support Jan 16 18:03:43.227751 ntpd[1928]: 16 Jan 18:03:43 ntpd[1928]: ---------------------------------------------------- Jan 16 18:03:43.224790 ntpd[1928]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 16 18:03:43.224810 ntpd[1928]: ---------------------------------------------------- Jan 16 18:03:43.224826 ntpd[1928]: ntp-4 is maintained by Network Time Foundation, Jan 16 18:03:43.224843 ntpd[1928]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 16 18:03:43.224860 ntpd[1928]: corporation. 
Support and training for ntp-4 are Jan 16 18:03:43.224876 ntpd[1928]: available at https://www.nwtime.org/support Jan 16 18:03:43.224893 ntpd[1928]: ---------------------------------------------------- Jan 16 18:03:43.239480 ntpd[1928]: proto: precision = 0.096 usec (-23) Jan 16 18:03:43.239716 ntpd[1928]: 16 Jan 18:03:43 ntpd[1928]: proto: precision = 0.096 usec (-23) Jan 16 18:03:43.247499 ntpd[1928]: basedate set to 2026-01-04 Jan 16 18:03:43.256781 ntpd[1928]: 16 Jan 18:03:43 ntpd[1928]: basedate set to 2026-01-04 Jan 16 18:03:43.256781 ntpd[1928]: 16 Jan 18:03:43 ntpd[1928]: gps base set to 2026-01-04 (week 2400) Jan 16 18:03:43.256781 ntpd[1928]: 16 Jan 18:03:43 ntpd[1928]: Listen and drop on 0 v6wildcard [::]:123 Jan 16 18:03:43.256781 ntpd[1928]: 16 Jan 18:03:43 ntpd[1928]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 16 18:03:43.256781 ntpd[1928]: 16 Jan 18:03:43 ntpd[1928]: Listen normally on 2 lo 127.0.0.1:123 Jan 16 18:03:43.256781 ntpd[1928]: 16 Jan 18:03:43 ntpd[1928]: Listen normally on 3 eth0 172.31.22.249:123 Jan 16 18:03:43.256781 ntpd[1928]: 16 Jan 18:03:43 ntpd[1928]: Listen normally on 4 lo [::1]:123 Jan 16 18:03:43.256781 ntpd[1928]: 16 Jan 18:03:43 ntpd[1928]: bind(21) AF_INET6 [fe80::496:b3ff:fef6:c2db%2]:123 flags 0x811 failed: Cannot assign requested address Jan 16 18:03:43.256781 ntpd[1928]: 16 Jan 18:03:43 ntpd[1928]: unable to create socket on eth0 (5) for [fe80::496:b3ff:fef6:c2db%2]:123 Jan 16 18:03:43.256781 ntpd[1928]: 16 Jan 18:03:43 ntpd[1928]: cannot bind address fe80::496:b3ff:fef6:c2db%2 Jan 16 18:03:43.256781 ntpd[1928]: 16 Jan 18:03:43 ntpd[1928]: Listening on routing socket on fd #21 for interface updates Jan 16 18:03:43.258698 extend-filesystems[1926]: Found /dev/nvme0n1p9 Jan 16 18:03:43.247541 ntpd[1928]: gps base set to 2026-01-04 (week 2400) Jan 16 18:03:43.276111 ntpd[1928]: 16 Jan 18:03:43 ntpd[1928]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 16 18:03:43.276111 ntpd[1928]: 16 Jan 18:03:43 ntpd[1928]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 16 18:03:43.276299 extend-filesystems[1926]: Checking size of /dev/nvme0n1p9 Jan 16 18:03:43.247732 ntpd[1928]: Listen and drop on 0 v6wildcard [::]:123 Jan 16 18:03:43.247779 ntpd[1928]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 16 18:03:43.248091 ntpd[1928]: Listen normally on 2 lo 127.0.0.1:123 Jan 16 18:03:43.248170 ntpd[1928]: Listen normally on 3 eth0 172.31.22.249:123 Jan 16 18:03:43.248257 ntpd[1928]: Listen normally on 4 lo [::1]:123 Jan 16 18:03:43.248322 ntpd[1928]: bind(21) AF_INET6 [fe80::496:b3ff:fef6:c2db%2]:123 flags 0x811 failed: Cannot assign requested address Jan 16 18:03:43.248361 ntpd[1928]: unable to create socket on eth0 (5) for [fe80::496:b3ff:fef6:c2db%2]:123 Jan 16 18:03:43.248387 ntpd[1928]: cannot bind address fe80::496:b3ff:fef6:c2db%2 Jan 16 18:03:43.248431 ntpd[1928]: Listening on routing socket on fd #21 for interface updates Jan 16 18:03:43.271483 ntpd[1928]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 16 18:03:43.271542 ntpd[1928]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 16 18:03:43.292212 tar[1941]: linux-arm64/LICENSE Jan 16 18:03:43.292212 tar[1941]: linux-arm64/helm Jan 16 18:03:43.293258 dbus-daemon[1923]: [system] SELinux support is enabled Jan 16 18:03:43.293672 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
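
ntpd's "basedate set to 2026-01-04" / "gps base set to 2026-01-04 (week 2400)" lines above are an era anchor that appears to be derived from the build date in the banner; week 2400 counted from the GPS epoch (1980-01-06) lands exactly on that date:

    from datetime import date, timedelta

    # GPS week 2400, counted from the GPS epoch 1980-01-06, as in the ntpd log above.
    gps_epoch = date(1980, 1, 6)
    print(gps_epoch + timedelta(weeks=2400))   # 2026-01-04
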
Jan 16 18:03:43.301736 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 16 18:03:43.301811 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 16 18:03:43.305459 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 16 18:03:43.305499 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 16 18:03:43.324834 systemd[1]: motdgen.service: Deactivated successfully. Jan 16 18:03:43.327454 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 16 18:03:43.334745 update_engine[1936]: I20260116 18:03:43.334347 1936 main.cc:92] Flatcar Update Engine starting Jan 16 18:03:43.353703 jq[1963]: true Jan 16 18:03:43.373231 dbus-daemon[1923]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1870 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 16 18:03:43.382296 extend-filesystems[1926]: Resized partition /dev/nvme0n1p9 Jan 16 18:03:43.375210 dbus-daemon[1923]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 16 18:03:43.386845 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 16 18:03:43.396961 systemd[1]: Started update-engine.service - Update Engine. Jan 16 18:03:43.406475 update_engine[1936]: I20260116 18:03:43.403636 1936 update_check_scheduler.cc:74] Next update check in 8m30s Jan 16 18:03:43.419146 extend-filesystems[1989]: resize2fs 1.47.3 (8-Jul-2025) Jan 16 18:03:43.431824 coreos-metadata[1922]: Jan 16 18:03:43.431 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 16 18:03:43.449149 coreos-metadata[1922]: Jan 16 18:03:43.440 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 16 18:03:43.449672 coreos-metadata[1922]: Jan 16 18:03:43.449 INFO Fetch successful Jan 16 18:03:43.449672 coreos-metadata[1922]: Jan 16 18:03:43.449 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 16 18:03:43.452724 coreos-metadata[1922]: Jan 16 18:03:43.452 INFO Fetch successful Jan 16 18:03:43.452724 coreos-metadata[1922]: Jan 16 18:03:43.452 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 16 18:03:43.455255 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Jan 16 18:03:43.456207 coreos-metadata[1922]: Jan 16 18:03:43.456 INFO Fetch successful Jan 16 18:03:43.456207 coreos-metadata[1922]: Jan 16 18:03:43.456 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 16 18:03:43.457334 coreos-metadata[1922]: Jan 16 18:03:43.457 INFO Fetch successful Jan 16 18:03:43.457334 coreos-metadata[1922]: Jan 16 18:03:43.457 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 16 18:03:43.458361 coreos-metadata[1922]: Jan 16 18:03:43.458 INFO Fetch failed with 404: resource not found Jan 16 18:03:43.458361 coreos-metadata[1922]: Jan 16 18:03:43.458 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 16 18:03:43.461740 coreos-metadata[1922]: Jan 16 18:03:43.461 INFO Fetch 
successful Jan 16 18:03:43.461740 coreos-metadata[1922]: Jan 16 18:03:43.461 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 16 18:03:43.462737 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 16 18:03:43.466875 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 16 18:03:43.476243 coreos-metadata[1922]: Jan 16 18:03:43.469 INFO Fetch successful Jan 16 18:03:43.476243 coreos-metadata[1922]: Jan 16 18:03:43.470 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 16 18:03:43.480727 coreos-metadata[1922]: Jan 16 18:03:43.480 INFO Fetch successful Jan 16 18:03:43.480727 coreos-metadata[1922]: Jan 16 18:03:43.480 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 16 18:03:43.484826 coreos-metadata[1922]: Jan 16 18:03:43.481 INFO Fetch successful Jan 16 18:03:43.484826 coreos-metadata[1922]: Jan 16 18:03:43.481 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 16 18:03:43.495155 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Jan 16 18:03:43.495241 coreos-metadata[1922]: Jan 16 18:03:43.490 INFO Fetch successful Jan 16 18:03:43.496375 systemd-networkd[1870]: eth0: Gained IPv6LL Jan 16 18:03:43.510361 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 16 18:03:43.514147 extend-filesystems[1989]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 16 18:03:43.514147 extend-filesystems[1989]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 16 18:03:43.514147 extend-filesystems[1989]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Jan 16 18:03:43.547399 extend-filesystems[1926]: Resized filesystem in /dev/nvme0n1p9 Jan 16 18:03:43.520234 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 16 18:03:43.520762 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 16 18:03:43.537292 systemd[1]: Reached target network-online.target - Network is Online. Jan 16 18:03:43.548772 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 16 18:03:43.568626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 18:03:43.587441 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 16 18:03:43.658636 systemd-logind[1935]: Watching system buttons on /dev/input/event0 (Power Button) Jan 16 18:03:43.658701 systemd-logind[1935]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 16 18:03:43.665459 systemd-logind[1935]: New seat seat0. Jan 16 18:03:43.713444 systemd[1]: Started systemd-logind.service - User Login Management. Jan 16 18:03:43.742175 bash[2028]: Updated "/home/core/.ssh/authorized_keys" Jan 16 18:03:43.749228 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 16 18:03:43.757470 systemd[1]: Starting sshkeys.service... Jan 16 18:03:43.804885 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 16 18:03:43.810438 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 16 18:03:43.816310 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 16 18:03:43.820023 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
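
The resize sequence above grows the root filesystem on /dev/nvme0n1p9 online, from 1,617,920 to 2,604,027 blocks of 4 KiB, i.e. roughly 6.2 GiB to 9.9 GiB:

    # Block counts from the EXT4 resize messages above (4 KiB blocks).
    old_blocks, new_blocks, block_size = 1_617_920, 2_604_027, 4096
    to_gib = lambda blocks: blocks * block_size / 2**30
    print(f"{to_gib(old_blocks):.2f} GiB -> {to_gib(new_blocks):.2f} GiB")
    # 6.17 GiB -> 9.93 GiB
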
Jan 16 18:03:43.858322 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 16 18:03:44.023677 containerd[1954]: time="2026-01-16T18:03:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 16 18:03:44.026668 containerd[1954]: time="2026-01-16T18:03:44.025046147Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 16 18:03:44.124151 coreos-metadata[2045]: Jan 16 18:03:44.119 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 16 18:03:44.124151 coreos-metadata[2045]: Jan 16 18:03:44.122 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 16 18:03:44.127237 coreos-metadata[2045]: Jan 16 18:03:44.125 INFO Fetch successful Jan 16 18:03:44.127237 coreos-metadata[2045]: Jan 16 18:03:44.125 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 16 18:03:44.132179 amazon-ssm-agent[2009]: Initializing new seelog logger Jan 16 18:03:44.132605 coreos-metadata[2045]: Jan 16 18:03:44.130 INFO Fetch successful Jan 16 18:03:44.144255 unknown[2045]: wrote ssh authorized keys file for user: core Jan 16 18:03:44.147156 amazon-ssm-agent[2009]: New Seelog Logger Creation Complete Jan 16 18:03:44.147156 amazon-ssm-agent[2009]: 2026/01/16 18:03:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 16 18:03:44.147156 amazon-ssm-agent[2009]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 16 18:03:44.154149 amazon-ssm-agent[2009]: 2026/01/16 18:03:44 processing appconfig overrides Jan 16 18:03:44.156315 amazon-ssm-agent[2009]: 2026/01/16 18:03:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 16 18:03:44.161610 amazon-ssm-agent[2009]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 16 18:03:44.161610 amazon-ssm-agent[2009]: 2026/01/16 18:03:44 processing appconfig overrides Jan 16 18:03:44.161610 amazon-ssm-agent[2009]: 2026/01/16 18:03:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 16 18:03:44.161610 amazon-ssm-agent[2009]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 16 18:03:44.161610 amazon-ssm-agent[2009]: 2026/01/16 18:03:44 processing appconfig overrides Jan 16 18:03:44.163399 amazon-ssm-agent[2009]: 2026-01-16 18:03:44.1546 INFO Proxy environment variables: Jan 16 18:03:44.170874 amazon-ssm-agent[2009]: 2026/01/16 18:03:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 16 18:03:44.171009 amazon-ssm-agent[2009]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
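
Both metadata fetchers above (coreos-metadata for the instance data and the SSH-keys unit) follow the same token-then-fetch pattern against 169.254.169.254: PUT /latest/api/token first, then present the token when reading the dated meta-data paths. A minimal standard-library sketch of that pattern, assuming the usual IMDSv2 header names and that it runs on an instance:

    import urllib.request

    BASE = "http://169.254.169.254"

    # Step 1: obtain a session token, as in "Putting .../latest/api/token" above.
    token_req = urllib.request.Request(
        f"{BASE}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"})
    token = urllib.request.urlopen(token_req).read().decode()

    # Step 2: read a metadata path seen in the log, presenting the token.
    md_req = urllib.request.Request(
        f"{BASE}/2021-01-03/meta-data/instance-id",
        headers={"X-aws-ec2-metadata-token": token})
    print(urllib.request.urlopen(md_req).read().decode())
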
Jan 16 18:03:44.171506 amazon-ssm-agent[2009]: 2026/01/16 18:03:44 processing appconfig overrides Jan 16 18:03:44.181424 containerd[1954]: time="2026-01-16T18:03:44.181358939Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.536µs" Jan 16 18:03:44.183155 containerd[1954]: time="2026-01-16T18:03:44.182149607Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 16 18:03:44.183155 containerd[1954]: time="2026-01-16T18:03:44.182241623Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 16 18:03:44.183155 containerd[1954]: time="2026-01-16T18:03:44.182273315Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 16 18:03:44.183155 containerd[1954]: time="2026-01-16T18:03:44.182575607Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 16 18:03:44.183155 containerd[1954]: time="2026-01-16T18:03:44.182611967Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 16 18:03:44.183155 containerd[1954]: time="2026-01-16T18:03:44.182728103Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 16 18:03:44.183155 containerd[1954]: time="2026-01-16T18:03:44.182755775Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 16 18:03:44.191158 containerd[1954]: time="2026-01-16T18:03:44.187713959Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 16 18:03:44.191158 containerd[1954]: time="2026-01-16T18:03:44.187767995Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 16 18:03:44.191158 containerd[1954]: time="2026-01-16T18:03:44.187799219Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 16 18:03:44.191158 containerd[1954]: time="2026-01-16T18:03:44.187820735Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 16 18:03:44.191158 containerd[1954]: time="2026-01-16T18:03:44.190258343Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 16 18:03:44.191158 containerd[1954]: time="2026-01-16T18:03:44.190306787Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 16 18:03:44.191158 containerd[1954]: time="2026-01-16T18:03:44.190567919Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 16 18:03:44.191158 containerd[1954]: time="2026-01-16T18:03:44.190969163Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 16 18:03:44.191158 containerd[1954]: time="2026-01-16T18:03:44.191030255Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 16 18:03:44.191158 containerd[1954]: time="2026-01-16T18:03:44.191055587Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 16 18:03:44.193974 containerd[1954]: time="2026-01-16T18:03:44.193906811Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 16 18:03:44.194603 containerd[1954]: time="2026-01-16T18:03:44.194564531Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 16 18:03:44.197668 containerd[1954]: time="2026-01-16T18:03:44.197557931Z" level=info msg="metadata content store policy set" policy=shared Jan 16 18:03:44.212142 containerd[1954]: time="2026-01-16T18:03:44.211500275Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 16 18:03:44.212142 containerd[1954]: time="2026-01-16T18:03:44.211621943Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 16 18:03:44.212142 containerd[1954]: time="2026-01-16T18:03:44.211838195Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 16 18:03:44.212142 containerd[1954]: time="2026-01-16T18:03:44.211867847Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 16 18:03:44.212142 containerd[1954]: time="2026-01-16T18:03:44.211898867Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 16 18:03:44.212142 containerd[1954]: time="2026-01-16T18:03:44.211931579Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 16 18:03:44.212142 containerd[1954]: time="2026-01-16T18:03:44.211959563Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 16 18:03:44.212142 containerd[1954]: time="2026-01-16T18:03:44.212009243Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 16 18:03:44.212142 containerd[1954]: time="2026-01-16T18:03:44.212044019Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 16 18:03:44.212142 containerd[1954]: time="2026-01-16T18:03:44.212073647Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 16 18:03:44.213141 containerd[1954]: time="2026-01-16T18:03:44.212100575Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 16 18:03:44.213141 containerd[1954]: time="2026-01-16T18:03:44.212650835Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 16 18:03:44.213141 containerd[1954]: time="2026-01-16T18:03:44.212680547Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 16 18:03:44.213141 containerd[1954]: time="2026-01-16T18:03:44.212742227Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 16 18:03:44.213141 containerd[1954]: time="2026-01-16T18:03:44.212971343Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 16 18:03:44.213141 containerd[1954]: time="2026-01-16T18:03:44.213012767Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 16 18:03:44.213141 containerd[1954]: time="2026-01-16T18:03:44.213046103Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 16 18:03:44.213141 containerd[1954]: time="2026-01-16T18:03:44.213073595Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 16 18:03:44.215857 containerd[1954]: time="2026-01-16T18:03:44.213100007Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 16 18:03:44.215857 containerd[1954]: time="2026-01-16T18:03:44.215310611Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 16 18:03:44.215857 containerd[1954]: time="2026-01-16T18:03:44.215370347Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 16 18:03:44.215857 containerd[1954]: time="2026-01-16T18:03:44.215397971Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 16 18:03:44.215857 containerd[1954]: time="2026-01-16T18:03:44.215436299Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 16 18:03:44.215857 containerd[1954]: time="2026-01-16T18:03:44.215483639Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 16 18:03:44.215857 containerd[1954]: time="2026-01-16T18:03:44.215512175Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 16 18:03:44.215857 containerd[1954]: time="2026-01-16T18:03:44.215570099Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 16 18:03:44.215857 containerd[1954]: time="2026-01-16T18:03:44.215638103Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 16 18:03:44.215857 containerd[1954]: time="2026-01-16T18:03:44.215675639Z" level=info msg="Start snapshots syncer" Jan 16 18:03:44.220265 containerd[1954]: time="2026-01-16T18:03:44.219162552Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 16 18:03:44.224150 containerd[1954]: time="2026-01-16T18:03:44.223208016Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 16 18:03:44.224150 containerd[1954]: time="2026-01-16T18:03:44.223364016Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 16 18:03:44.224483 containerd[1954]: time="2026-01-16T18:03:44.223539216Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 16 18:03:44.224483 containerd[1954]: time="2026-01-16T18:03:44.223857648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 16 18:03:44.236729 containerd[1954]: time="2026-01-16T18:03:44.225366468Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 16 18:03:44.236729 containerd[1954]: time="2026-01-16T18:03:44.235583040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 16 18:03:44.236729 containerd[1954]: time="2026-01-16T18:03:44.235636104Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 16 18:03:44.236729 containerd[1954]: time="2026-01-16T18:03:44.235680096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 16 18:03:44.236729 containerd[1954]: time="2026-01-16T18:03:44.235712316Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 16 18:03:44.236729 containerd[1954]: time="2026-01-16T18:03:44.235752180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 16 18:03:44.236729 containerd[1954]: time="2026-01-16T18:03:44.235792824Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 16 
18:03:44.236729 containerd[1954]: time="2026-01-16T18:03:44.235834176Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 16 18:03:44.236729 containerd[1954]: time="2026-01-16T18:03:44.235932228Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 16 18:03:44.236729 containerd[1954]: time="2026-01-16T18:03:44.235977480Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 16 18:03:44.236729 containerd[1954]: time="2026-01-16T18:03:44.236010048Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 16 18:03:44.236729 containerd[1954]: time="2026-01-16T18:03:44.236046096Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 16 18:03:44.236729 containerd[1954]: time="2026-01-16T18:03:44.236068932Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 16 18:03:44.243715 containerd[1954]: time="2026-01-16T18:03:44.236107416Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 16 18:03:44.243715 containerd[1954]: time="2026-01-16T18:03:44.240101088Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 16 18:03:44.243715 containerd[1954]: time="2026-01-16T18:03:44.240360792Z" level=info msg="runtime interface created" Jan 16 18:03:44.243715 containerd[1954]: time="2026-01-16T18:03:44.240388776Z" level=info msg="created NRI interface" Jan 16 18:03:44.243715 containerd[1954]: time="2026-01-16T18:03:44.240425292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 16 18:03:44.243715 containerd[1954]: time="2026-01-16T18:03:44.240463800Z" level=info msg="Connect containerd service" Jan 16 18:03:44.243715 containerd[1954]: time="2026-01-16T18:03:44.240548772Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 16 18:03:44.248766 containerd[1954]: time="2026-01-16T18:03:44.248706360Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 18:03:44.250575 update-ssh-keys[2094]: Updated "/home/core/.ssh/authorized_keys" Jan 16 18:03:44.254462 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 16 18:03:44.266505 systemd[1]: Finished sshkeys.service. Jan 16 18:03:44.272738 amazon-ssm-agent[2009]: 2026-01-16 18:03:44.1562 INFO https_proxy: Jan 16 18:03:44.379999 amazon-ssm-agent[2009]: 2026-01-16 18:03:44.1562 INFO http_proxy: Jan 16 18:03:44.393348 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
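
The "starting cri plugin" line further up dumps the effective CRI configuration as one JSON blob (note SystemdCgroup=true for the runc runtime, and the empty /etc/cni/net.d that triggers the CNI error above). Once the fragment is unescaped, pulling a single setting back out is straightforward; a sketch over a trimmed fragment of that dump:

    import json

    # Trimmed fragment of the CRI config dump from the containerd log above.
    fragment = ('{"containerd":{"defaultRuntimeName":"runc","runtimes":{"runc":'
                '{"runtimeType":"io.containerd.runc.v2",'
                '"options":{"SystemdCgroup":true}}}},'
                '"cni":{"confDir":"/etc/cni/net.d"}}')
    cfg = json.loads(fragment)
    print(cfg["containerd"]["runtimes"]["runc"]["options"]["SystemdCgroup"])  # True
    print(cfg["cni"]["confDir"])                                              # /etc/cni/net.d
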
Jan 16 18:03:44.399851 dbus-daemon[1923]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 16 18:03:44.404875 dbus-daemon[1923]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1990 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 16 18:03:44.416725 systemd[1]: Starting polkit.service - Authorization Manager... Jan 16 18:03:44.485384 amazon-ssm-agent[2009]: 2026-01-16 18:03:44.1562 INFO no_proxy: Jan 16 18:03:44.583200 amazon-ssm-agent[2009]: 2026-01-16 18:03:44.1583 INFO Checking if agent identity type OnPrem can be assumed Jan 16 18:03:44.632630 containerd[1954]: time="2026-01-16T18:03:44.632482562Z" level=info msg="Start subscribing containerd event" Jan 16 18:03:44.632839 containerd[1954]: time="2026-01-16T18:03:44.632810162Z" level=info msg="Start recovering state" Jan 16 18:03:44.633030 containerd[1954]: time="2026-01-16T18:03:44.632980562Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 16 18:03:44.633136 containerd[1954]: time="2026-01-16T18:03:44.633085466Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 16 18:03:44.633311 containerd[1954]: time="2026-01-16T18:03:44.633264914Z" level=info msg="Start event monitor" Jan 16 18:03:44.633430 containerd[1954]: time="2026-01-16T18:03:44.633406490Z" level=info msg="Start cni network conf syncer for default" Jan 16 18:03:44.633545 containerd[1954]: time="2026-01-16T18:03:44.633522530Z" level=info msg="Start streaming server" Jan 16 18:03:44.633661 containerd[1954]: time="2026-01-16T18:03:44.633635450Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 16 18:03:44.633828 containerd[1954]: time="2026-01-16T18:03:44.633775742Z" level=info msg="runtime interface starting up..." Jan 16 18:03:44.633828 containerd[1954]: time="2026-01-16T18:03:44.633864338Z" level=info msg="starting plugins..." Jan 16 18:03:44.633828 containerd[1954]: time="2026-01-16T18:03:44.633903662Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 16 18:03:44.646161 containerd[1954]: time="2026-01-16T18:03:44.644378246Z" level=info msg="containerd successfully booted in 0.626417s" Jan 16 18:03:44.650060 systemd[1]: Started containerd.service - containerd container runtime. Jan 16 18:03:44.684153 amazon-ssm-agent[2009]: 2026-01-16 18:03:44.1584 INFO Checking if agent identity type EC2 can be assumed Jan 16 18:03:44.782849 amazon-ssm-agent[2009]: 2026-01-16 18:03:44.4155 INFO Agent will take identity from EC2 Jan 16 18:03:44.866377 locksmithd[1991]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 16 18:03:44.885140 amazon-ssm-agent[2009]: 2026-01-16 18:03:44.4376 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 16 18:03:44.948619 sshd_keygen[1970]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 16 18:03:44.985726 amazon-ssm-agent[2009]: 2026-01-16 18:03:44.4377 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 16 18:03:45.062110 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 16 18:03:45.075200 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 16 18:03:45.082544 systemd[1]: Started sshd@0-172.31.22.249:22-4.153.228.146:50700.service - OpenSSH per-connection server daemon (4.153.228.146:50700). 
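Once containerd reports serving... on /run/containerd/containerd.sock (and its ttrpc twin) and systemd marks containerd.service as started, the daemon is accepting clients. A quick, hedged way to confirm that from the host is a plain UNIX-socket connect against the path named in the log (root required, since the socket is root-owned by default):

    import socket

    # Sketch: connect to the containerd socket shown in the log above; a
    # successful connect (no exception) means the daemon is accepting clients.
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/run/containerd/containerd.sock")
    s.close()
    print("containerd socket is accepting connections")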
Jan 16 18:03:45.086490 polkitd[2135]: Started polkitd version 126 Jan 16 18:03:45.094964 amazon-ssm-agent[2009]: 2026-01-16 18:03:44.4377 INFO [amazon-ssm-agent] Starting Core Agent Jan 16 18:03:45.134165 polkitd[2135]: Loading rules from directory /etc/polkit-1/rules.d Jan 16 18:03:45.136034 polkitd[2135]: Loading rules from directory /run/polkit-1/rules.d Jan 16 18:03:45.137328 polkitd[2135]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 16 18:03:45.138084 polkitd[2135]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 16 18:03:45.138211 polkitd[2135]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 16 18:03:45.138325 polkitd[2135]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 16 18:03:45.141243 polkitd[2135]: Finished loading, compiling and executing 2 rules Jan 16 18:03:45.142879 systemd[1]: Started polkit.service - Authorization Manager. Jan 16 18:03:45.149850 dbus-daemon[1923]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 16 18:03:45.156437 polkitd[2135]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 16 18:03:45.154507 systemd[1]: issuegen.service: Deactivated successfully. Jan 16 18:03:45.159276 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 16 18:03:45.169879 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 16 18:03:45.192561 amazon-ssm-agent[2009]: 2026-01-16 18:03:44.4377 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jan 16 18:03:45.234374 systemd-hostnamed[1990]: Hostname set to (transient) Jan 16 18:03:45.236206 systemd-resolved[1538]: System hostname changed to 'ip-172-31-22-249'. Jan 16 18:03:45.238231 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 16 18:03:45.252566 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 16 18:03:45.260209 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 16 18:03:45.264635 systemd[1]: Reached target getty.target - Login Prompts. Jan 16 18:03:45.291071 amazon-ssm-agent[2009]: 2026-01-16 18:03:44.4377 INFO [Registrar] Starting registrar module Jan 16 18:03:45.391420 amazon-ssm-agent[2009]: 2026-01-16 18:03:44.4467 INFO [EC2Identity] Checking disk for registration info Jan 16 18:03:45.493047 amazon-ssm-agent[2009]: 2026-01-16 18:03:44.4468 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 16 18:03:45.573432 tar[1941]: linux-arm64/README.md Jan 16 18:03:45.592061 amazon-ssm-agent[2009]: 2026-01-16 18:03:44.4468 INFO [EC2Identity] Generating registration keypair Jan 16 18:03:45.606869 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 16 18:03:45.643295 amazon-ssm-agent[2009]: 2026/01/16 18:03:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 16 18:03:45.643295 amazon-ssm-agent[2009]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
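polkitd scans four rules directories at startup; on this image two of them (/run/polkit-1/rules.d and /usr/local/share/polkit-1/rules.d) do not exist, which is all the two "Error opening rules directory" messages report. The log shows polkitd still finishing with 2 rules loaded and the service starting, so those errors appear benign. A small sketch that re-checks the same four paths:

    import os

    # Sketch: the four rules directories polkitd reports scanning in the log above.
    rule_dirs = [
        "/etc/polkit-1/rules.d",
        "/run/polkit-1/rules.d",
        "/usr/local/share/polkit-1/rules.d",
        "/usr/share/polkit-1/rules.d",
    ]
    for d in rule_dirs:
        print(f"{d}: {'present' if os.path.isdir(d) else 'missing'}")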
Jan 16 18:03:45.643788 amazon-ssm-agent[2009]: 2026/01/16 18:03:45 processing appconfig overrides Jan 16 18:03:45.684597 amazon-ssm-agent[2009]: 2026-01-16 18:03:45.5909 INFO [EC2Identity] Checking write access before registering Jan 16 18:03:45.684597 amazon-ssm-agent[2009]: 2026-01-16 18:03:45.5918 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 16 18:03:45.684597 amazon-ssm-agent[2009]: 2026-01-16 18:03:45.6426 INFO [EC2Identity] EC2 registration was successful. Jan 16 18:03:45.684794 amazon-ssm-agent[2009]: 2026-01-16 18:03:45.6427 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jan 16 18:03:45.684794 amazon-ssm-agent[2009]: 2026-01-16 18:03:45.6429 INFO [CredentialRefresher] credentialRefresher has started Jan 16 18:03:45.684794 amazon-ssm-agent[2009]: 2026-01-16 18:03:45.6429 INFO [CredentialRefresher] Starting credentials refresher loop Jan 16 18:03:45.684794 amazon-ssm-agent[2009]: 2026-01-16 18:03:45.6841 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 16 18:03:45.684794 amazon-ssm-agent[2009]: 2026-01-16 18:03:45.6845 INFO [CredentialRefresher] Credentials ready Jan 16 18:03:45.692492 amazon-ssm-agent[2009]: 2026-01-16 18:03:45.6847 INFO [CredentialRefresher] Next credential rotation will be in 29.9999916941 minutes Jan 16 18:03:45.702171 sshd[2172]: Accepted publickey for core from 4.153.228.146 port 50700 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:03:45.706773 sshd-session[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:03:45.720647 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 16 18:03:45.725487 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 16 18:03:45.747007 systemd-logind[1935]: New session 1 of user core. Jan 16 18:03:45.771200 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 16 18:03:45.782640 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 16 18:03:45.808829 (systemd)[2197]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:03:45.814452 systemd-logind[1935]: New session 2 of user core. Jan 16 18:03:46.110543 systemd[2197]: Queued start job for default target default.target. Jan 16 18:03:46.122175 systemd[2197]: Created slice app.slice - User Application Slice. Jan 16 18:03:46.122252 systemd[2197]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 16 18:03:46.122284 systemd[2197]: Reached target paths.target - Paths. Jan 16 18:03:46.122380 systemd[2197]: Reached target timers.target - Timers. Jan 16 18:03:46.124964 systemd[2197]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 16 18:03:46.127404 systemd[2197]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 16 18:03:46.157729 systemd[2197]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 16 18:03:46.157940 systemd[2197]: Reached target sockets.target - Sockets. Jan 16 18:03:46.161401 systemd[2197]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 16 18:03:46.161637 systemd[2197]: Reached target basic.target - Basic System. Jan 16 18:03:46.161747 systemd[2197]: Reached target default.target - Main User Target. Jan 16 18:03:46.161831 systemd[2197]: Startup finished in 335ms. 
Jan 16 18:03:46.162909 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 16 18:03:46.172473 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 16 18:03:46.225846 ntpd[1928]: Listen normally on 6 eth0 [fe80::496:b3ff:fef6:c2db%2]:123 Jan 16 18:03:46.226768 ntpd[1928]: 16 Jan 18:03:46 ntpd[1928]: Listen normally on 6 eth0 [fe80::496:b3ff:fef6:c2db%2]:123 Jan 16 18:03:46.450871 systemd[1]: Started sshd@1-172.31.22.249:22-4.153.228.146:42676.service - OpenSSH per-connection server daemon (4.153.228.146:42676). Jan 16 18:03:46.715788 amazon-ssm-agent[2009]: 2026-01-16 18:03:46.7152 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 16 18:03:46.817822 amazon-ssm-agent[2009]: 2026-01-16 18:03:46.7189 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2216) started Jan 16 18:03:46.918672 amazon-ssm-agent[2009]: 2026-01-16 18:03:46.7190 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 16 18:03:46.951776 sshd[2211]: Accepted publickey for core from 4.153.228.146 port 42676 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:03:46.954671 sshd-session[2211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:03:46.963807 systemd-logind[1935]: New session 3 of user core. Jan 16 18:03:46.972478 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 16 18:03:47.220713 sshd[2221]: Connection closed by 4.153.228.146 port 42676 Jan 16 18:03:47.219697 sshd-session[2211]: pam_unix(sshd:session): session closed for user core Jan 16 18:03:47.229672 systemd[1]: sshd@1-172.31.22.249:22-4.153.228.146:42676.service: Deactivated successfully. Jan 16 18:03:47.233970 systemd[1]: session-3.scope: Deactivated successfully. Jan 16 18:03:47.237590 systemd-logind[1935]: Session 3 logged out. Waiting for processes to exit. Jan 16 18:03:47.239894 systemd-logind[1935]: Removed session 3. Jan 16 18:03:47.304908 systemd[1]: Started sshd@2-172.31.22.249:22-4.153.228.146:42688.service - OpenSSH per-connection server daemon (4.153.228.146:42688). Jan 16 18:03:47.775989 sshd[2233]: Accepted publickey for core from 4.153.228.146 port 42688 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:03:47.779154 sshd-session[2233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:03:47.789201 systemd-logind[1935]: New session 4 of user core. Jan 16 18:03:47.797437 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 16 18:03:48.021182 sshd[2238]: Connection closed by 4.153.228.146 port 42688 Jan 16 18:03:48.023400 sshd-session[2233]: pam_unix(sshd:session): session closed for user core Jan 16 18:03:48.030902 systemd[1]: sshd@2-172.31.22.249:22-4.153.228.146:42688.service: Deactivated successfully. Jan 16 18:03:48.035085 systemd[1]: session-4.scope: Deactivated successfully. Jan 16 18:03:48.038551 systemd-logind[1935]: Session 4 logged out. Waiting for processes to exit. Jan 16 18:03:48.043704 systemd-logind[1935]: Removed session 4. Jan 16 18:03:48.747173 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 18:03:48.753285 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 16 18:03:48.758169 systemd[1]: Startup finished in 4.014s (kernel) + 8.334s (initrd) + 11.694s (userspace) = 24.043s. 
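A small aside on the boot-time summary just above: each phase duration is rounded to milliseconds independently, so the parts need not add up exactly to the printed total, which systemd derives from the unrounded microsecond counters. A one-line check:

    # The rounded phase durations reported above: 4.014s + 8.334s + 11.694s.
    print(round(4.014 + 8.334 + 11.694, 3))  # 24.042, one millisecond short of the
                                             # reported 24.043s total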
Jan 16 18:03:48.772923 (kubelet)[2248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 18:03:50.542893 kubelet[2248]: E0116 18:03:50.542829 2248 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 18:03:50.547713 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 18:03:50.548033 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 18:03:50.548719 systemd[1]: kubelet.service: Consumed 1.468s CPU time, 255.9M memory peak. Jan 16 18:03:58.117854 systemd[1]: Started sshd@3-172.31.22.249:22-4.153.228.146:38848.service - OpenSSH per-connection server daemon (4.153.228.146:38848). Jan 16 18:03:58.595166 sshd[2261]: Accepted publickey for core from 4.153.228.146 port 38848 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:03:58.597248 sshd-session[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:03:58.607217 systemd-logind[1935]: New session 5 of user core. Jan 16 18:03:58.614402 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 16 18:03:58.837045 sshd[2265]: Connection closed by 4.153.228.146 port 38848 Jan 16 18:03:58.838051 sshd-session[2261]: pam_unix(sshd:session): session closed for user core Jan 16 18:03:58.848469 systemd[1]: sshd@3-172.31.22.249:22-4.153.228.146:38848.service: Deactivated successfully. Jan 16 18:03:58.853002 systemd[1]: session-5.scope: Deactivated successfully. Jan 16 18:03:58.858257 systemd-logind[1935]: Session 5 logged out. Waiting for processes to exit. Jan 16 18:03:58.860245 systemd-logind[1935]: Removed session 5. Jan 16 18:03:58.945778 systemd[1]: Started sshd@4-172.31.22.249:22-4.153.228.146:38860.service - OpenSSH per-connection server daemon (4.153.228.146:38860). Jan 16 18:03:59.438076 sshd[2271]: Accepted publickey for core from 4.153.228.146 port 38860 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:03:59.440534 sshd-session[2271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:03:59.449215 systemd-logind[1935]: New session 6 of user core. Jan 16 18:03:59.458438 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 16 18:03:59.692483 sshd[2275]: Connection closed by 4.153.228.146 port 38860 Jan 16 18:03:59.691371 sshd-session[2271]: pam_unix(sshd:session): session closed for user core Jan 16 18:03:59.701408 systemd[1]: sshd@4-172.31.22.249:22-4.153.228.146:38860.service: Deactivated successfully. Jan 16 18:03:59.705020 systemd[1]: session-6.scope: Deactivated successfully. Jan 16 18:03:59.707460 systemd-logind[1935]: Session 6 logged out. Waiting for processes to exit. Jan 16 18:03:59.710584 systemd-logind[1935]: Removed session 6. Jan 16 18:03:59.780298 systemd[1]: Started sshd@5-172.31.22.249:22-4.153.228.146:38864.service - OpenSSH per-connection server daemon (4.153.228.146:38864). 
Jan 16 18:04:00.234874 sshd[2281]: Accepted publickey for core from 4.153.228.146 port 38864 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:04:00.237406 sshd-session[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:04:00.245493 systemd-logind[1935]: New session 7 of user core. Jan 16 18:04:00.255390 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 16 18:04:00.472689 sshd[2285]: Connection closed by 4.153.228.146 port 38864 Jan 16 18:04:00.473530 sshd-session[2281]: pam_unix(sshd:session): session closed for user core Jan 16 18:04:00.482878 systemd-logind[1935]: Session 7 logged out. Waiting for processes to exit. Jan 16 18:04:00.483647 systemd[1]: sshd@5-172.31.22.249:22-4.153.228.146:38864.service: Deactivated successfully. Jan 16 18:04:00.487163 systemd[1]: session-7.scope: Deactivated successfully. Jan 16 18:04:00.492270 systemd-logind[1935]: Removed session 7. Jan 16 18:04:00.561903 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 16 18:04:00.564726 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 18:04:00.567055 systemd[1]: Started sshd@6-172.31.22.249:22-4.153.228.146:38880.service - OpenSSH per-connection server daemon (4.153.228.146:38880). Jan 16 18:04:00.941861 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 18:04:00.962691 (kubelet)[2302]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 18:04:01.032769 kubelet[2302]: E0116 18:04:01.032709 2302 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 18:04:01.039191 sshd[2292]: Accepted publickey for core from 4.153.228.146 port 38880 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:04:01.042527 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 18:04:01.042886 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 18:04:01.044701 sshd-session[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:04:01.045304 systemd[1]: kubelet.service: Consumed 320ms CPU time, 107.1M memory peak. Jan 16 18:04:01.056205 systemd-logind[1935]: New session 8 of user core. Jan 16 18:04:01.066414 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 16 18:04:01.218861 sudo[2311]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 16 18:04:01.220396 sudo[2311]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 18:04:01.232071 sudo[2311]: pam_unix(sudo:session): session closed for user root Jan 16 18:04:01.309272 sshd[2310]: Connection closed by 4.153.228.146 port 38880 Jan 16 18:04:01.310354 sshd-session[2292]: pam_unix(sshd:session): session closed for user core Jan 16 18:04:01.320002 systemd[1]: sshd@6-172.31.22.249:22-4.153.228.146:38880.service: Deactivated successfully. Jan 16 18:04:01.320524 systemd-logind[1935]: Session 8 logged out. Waiting for processes to exit. Jan 16 18:04:01.326203 systemd[1]: session-8.scope: Deactivated successfully. Jan 16 18:04:01.330348 systemd-logind[1935]: Removed session 8. 
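The two kubelet failures above (and the scheduled restart between them) are the usual pattern on a node that has not yet been joined to a cluster: the KUBELET_KUBEADM_ARGS variable referenced in the unit suggests a kubeadm-style drop-in, and kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; systemd keeps restarting it until kubeadm init/join (or whatever provisions the node) writes that file. A sketch of the failing precondition, assuming only the path taken from the error message:

    import os, sys

    # Sketch of the check kubelet is effectively failing on: the config file named
    # in the error above must exist before the service can stay up. kubeadm
    # init/join normally generates it.
    path = "/var/lib/kubelet/config.yaml"
    if not os.path.exists(path):
        sys.exit(f"open {path}: no such file or directory")
    print("kubelet config present")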
Jan 16 18:04:01.404562 systemd[1]: Started sshd@7-172.31.22.249:22-4.153.228.146:38886.service - OpenSSH per-connection server daemon (4.153.228.146:38886). Jan 16 18:04:01.880844 sshd[2318]: Accepted publickey for core from 4.153.228.146 port 38886 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:04:01.882742 sshd-session[2318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:04:01.890642 systemd-logind[1935]: New session 9 of user core. Jan 16 18:04:01.904403 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 16 18:04:02.048000 sudo[2324]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 16 18:04:02.048768 sudo[2324]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 18:04:02.054582 sudo[2324]: pam_unix(sudo:session): session closed for user root Jan 16 18:04:02.067076 sudo[2323]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 16 18:04:02.067756 sudo[2323]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 18:04:02.083693 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 16 18:04:02.155180 kernel: kauditd_printk_skb: 40 callbacks suppressed Jan 16 18:04:02.155332 kernel: audit: type=1305 audit(1768586642.149:229): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 16 18:04:02.149000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 16 18:04:02.155532 augenrules[2348]: No rules Jan 16 18:04:02.155807 systemd[1]: audit-rules.service: Deactivated successfully. Jan 16 18:04:02.149000 audit[2348]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd97305b0 a2=420 a3=0 items=0 ppid=2329 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:02.162761 kernel: audit: type=1300 audit(1768586642.149:229): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd97305b0 a2=420 a3=0 items=0 ppid=2329 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:02.156309 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 16 18:04:02.149000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 16 18:04:02.167537 kernel: audit: type=1327 audit(1768586642.149:229): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 16 18:04:02.164560 sudo[2323]: pam_unix(sudo:session): session closed for user root Jan 16 18:04:02.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:02.172380 kernel: audit: type=1130 audit(1768586642.155:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:04:02.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:02.176952 kernel: audit: type=1131 audit(1768586642.155:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:02.163000 audit[2323]: USER_END pid=2323 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:04:02.177134 kernel: audit: type=1106 audit(1768586642.163:232): pid=2323 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:04:02.163000 audit[2323]: CRED_DISP pid=2323 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:04:02.187150 kernel: audit: type=1104 audit(1768586642.163:233): pid=2323 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:04:02.244176 sshd[2322]: Connection closed by 4.153.228.146 port 38886 Jan 16 18:04:02.246375 sshd-session[2318]: pam_unix(sshd:session): session closed for user core Jan 16 18:04:02.247000 audit[2318]: USER_END pid=2318 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:04:02.248000 audit[2318]: CRED_DISP pid=2318 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:04:02.264611 kernel: audit: type=1106 audit(1768586642.247:234): pid=2318 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:04:02.264707 kernel: audit: type=1104 audit(1768586642.248:235): pid=2318 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:04:02.265768 systemd[1]: sshd@7-172.31.22.249:22-4.153.228.146:38886.service: Deactivated successfully. Jan 16 18:04:02.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.22.249:22-4.153.228.146:38886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:04:02.269442 systemd[1]: session-9.scope: Deactivated successfully. Jan 16 18:04:02.272353 kernel: audit: type=1131 audit(1768586642.265:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.22.249:22-4.153.228.146:38886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:02.274243 systemd-logind[1935]: Session 9 logged out. Waiting for processes to exit. Jan 16 18:04:02.278399 systemd-logind[1935]: Removed session 9. Jan 16 18:04:02.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.22.249:22-4.153.228.146:38900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:02.361304 systemd[1]: Started sshd@8-172.31.22.249:22-4.153.228.146:38900.service - OpenSSH per-connection server daemon (4.153.228.146:38900). Jan 16 18:04:02.860000 audit[2357]: USER_ACCT pid=2357 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:04:02.861839 sshd[2357]: Accepted publickey for core from 4.153.228.146 port 38900 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:04:02.862000 audit[2357]: CRED_ACQ pid=2357 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:04:02.862000 audit[2357]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc02e5990 a2=3 a3=0 items=0 ppid=1 pid=2357 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:02.862000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:04:02.865050 sshd-session[2357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:04:02.873144 systemd-logind[1935]: New session 10 of user core. Jan 16 18:04:02.882394 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 16 18:04:02.887000 audit[2357]: USER_START pid=2357 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:04:02.890000 audit[2361]: CRED_ACQ pid=2361 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:04:03.038000 audit[2362]: USER_ACCT pid=2362 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:04:03.038000 audit[2362]: CRED_REFR pid=2362 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? 
terminal=? res=success' Jan 16 18:04:03.039000 audit[2362]: USER_START pid=2362 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:04:03.039805 sudo[2362]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 16 18:04:03.040469 sudo[2362]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 18:04:03.541750 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 16 18:04:03.558883 (dockerd)[2380]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 16 18:04:03.946452 dockerd[2380]: time="2026-01-16T18:04:03.946357768Z" level=info msg="Starting up" Jan 16 18:04:03.949686 dockerd[2380]: time="2026-01-16T18:04:03.949575280Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 16 18:04:03.969876 dockerd[2380]: time="2026-01-16T18:04:03.969797226Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 16 18:04:04.057376 dockerd[2380]: time="2026-01-16T18:04:04.057200511Z" level=info msg="Loading containers: start." Jan 16 18:04:04.073169 kernel: Initializing XFRM netlink socket Jan 16 18:04:04.160000 audit[2430]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2430 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:04.160000 audit[2430]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffd9d50a70 a2=0 a3=0 items=0 ppid=2380 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.160000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 16 18:04:04.164000 audit[2432]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2432 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:04.164000 audit[2432]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffce9c73e0 a2=0 a3=0 items=0 ppid=2380 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.164000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 16 18:04:04.168000 audit[2434]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2434 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:04.168000 audit[2434]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdb44dc00 a2=0 a3=0 items=0 ppid=2380 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.168000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 16 18:04:04.172000 audit[2436]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2436 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jan 16 18:04:04.172000 audit[2436]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffb9ebe90 a2=0 a3=0 items=0 ppid=2380 pid=2436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.172000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 16 18:04:04.177000 audit[2438]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:04.177000 audit[2438]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdc596430 a2=0 a3=0 items=0 ppid=2380 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.177000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 16 18:04:04.181000 audit[2440]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2440 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:04.181000 audit[2440]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd9e17400 a2=0 a3=0 items=0 ppid=2380 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.181000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 18:04:04.186000 audit[2442]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2442 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:04.186000 audit[2442]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff7b948e0 a2=0 a3=0 items=0 ppid=2380 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.186000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 16 18:04:04.192000 audit[2444]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2444 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:04.192000 audit[2444]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffcb49bb70 a2=0 a3=0 items=0 ppid=2380 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.192000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 16 18:04:04.231000 audit[2447]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2447 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:04.231000 audit[2447]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffd4d41e10 a2=0 a3=0 items=0 
ppid=2380 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.231000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 16 18:04:04.236000 audit[2449]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:04.236000 audit[2449]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffdf209d80 a2=0 a3=0 items=0 ppid=2380 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.236000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 16 18:04:04.241000 audit[2451]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2451 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:04.241000 audit[2451]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffd6d9c870 a2=0 a3=0 items=0 ppid=2380 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.241000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 16 18:04:04.246000 audit[2453]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2453 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:04.246000 audit[2453]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffcef27e60 a2=0 a3=0 items=0 ppid=2380 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.246000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 18:04:04.250000 audit[2455]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:04.250000 audit[2455]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffffba72aa0 a2=0 a3=0 items=0 ppid=2380 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.250000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 16 18:04:04.318000 audit[2485]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2485 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:04.318000 audit[2485]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffff91894f0 a2=0 a3=0 items=0 ppid=2380 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.318000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 16 18:04:04.322000 audit[2487]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2487 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:04.322000 audit[2487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe992d520 a2=0 a3=0 items=0 ppid=2380 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.322000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 16 18:04:04.327000 audit[2489]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2489 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:04.327000 audit[2489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe1332da0 a2=0 a3=0 items=0 ppid=2380 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.327000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 16 18:04:04.331000 audit[2491]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:04.331000 audit[2491]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffff4c7df0 a2=0 a3=0 items=0 ppid=2380 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.331000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 16 18:04:04.335000 audit[2493]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2493 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:04.335000 audit[2493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffda29e520 a2=0 a3=0 items=0 ppid=2380 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.335000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 16 18:04:04.339000 audit[2495]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:04.339000 audit[2495]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffdb3daaf0 a2=0 a3=0 items=0 ppid=2380 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.339000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 18:04:04.343000 audit[2497]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2497 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:04.343000 audit[2497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffeeb24f70 a2=0 a3=0 items=0 ppid=2380 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.343000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 16 18:04:04.347000 audit[2499]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2499 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:04.347000 audit[2499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffd2e9cb10 a2=0 a3=0 items=0 ppid=2380 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.347000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 16 18:04:04.352000 audit[2501]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2501 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:04.352000 audit[2501]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffccab41d0 a2=0 a3=0 items=0 ppid=2380 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.352000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 16 18:04:04.356000 audit[2503]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2503 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:04.356000 audit[2503]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe68e5360 a2=0 a3=0 items=0 ppid=2380 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.356000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 16 18:04:04.361000 audit[2505]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:04.361000 audit[2505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=fffff94afaa0 a2=0 a3=0 items=0 ppid=2380 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.361000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 16 18:04:04.365000 audit[2507]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2507 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:04.365000 audit[2507]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffeda61580 a2=0 a3=0 items=0 ppid=2380 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.365000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 18:04:04.370000 audit[2509]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2509 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:04.370000 audit[2509]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffed8f8c80 a2=0 a3=0 items=0 ppid=2380 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.370000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 16 18:04:04.381000 audit[2514]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2514 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:04.381000 audit[2514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffee6aa8b0 a2=0 a3=0 items=0 ppid=2380 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.381000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 16 18:04:04.386000 audit[2516]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2516 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:04.386000 audit[2516]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd7c93ca0 a2=0 a3=0 items=0 ppid=2380 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.386000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 16 18:04:04.390000 audit[2518]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:04.390000 audit[2518]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe0d06e10 a2=0 a3=0 items=0 ppid=2380 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.390000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 16 18:04:04.395000 audit[2520]: NETFILTER_CFG table=filter:31 
family=10 entries=1 op=nft_register_chain pid=2520 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:04.395000 audit[2520]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff441fe70 a2=0 a3=0 items=0 ppid=2380 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.395000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 16 18:04:04.400000 audit[2522]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2522 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:04.400000 audit[2522]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffc0a46be0 a2=0 a3=0 items=0 ppid=2380 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.400000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 16 18:04:04.404000 audit[2524]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2524 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:04.404000 audit[2524]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc3a29c90 a2=0 a3=0 items=0 ppid=2380 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.404000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 16 18:04:04.424820 (udev-worker)[2402]: Network interface NamePolicy= disabled on kernel command line. 
Jan 16 18:04:04.444000 audit[2528]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:04.444000 audit[2528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffea293270 a2=0 a3=0 items=0 ppid=2380 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.444000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 16 18:04:04.449000 audit[2530]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2530 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:04.449000 audit[2530]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffcbbd65d0 a2=0 a3=0 items=0 ppid=2380 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.449000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 16 18:04:04.466000 audit[2538]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:04.466000 audit[2538]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffecfd70a0 a2=0 a3=0 items=0 ppid=2380 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.466000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 16 18:04:04.484000 audit[2544]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2544 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:04.484000 audit[2544]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffc15e87b0 a2=0 a3=0 items=0 ppid=2380 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.484000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 16 18:04:04.491000 audit[2546]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2546 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:04.491000 audit[2546]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffc838f690 a2=0 a3=0 items=0 ppid=2380 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.491000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 16 18:04:04.495000 audit[2548]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2548 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:04.495000 audit[2548]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc72a57f0 a2=0 a3=0 items=0 ppid=2380 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.495000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 16 18:04:04.499000 audit[2550]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:04.499000 audit[2550]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffc51f14d0 a2=0 a3=0 items=0 ppid=2380 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.499000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 16 18:04:04.504000 audit[2552]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2552 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:04.504000 audit[2552]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc0012530 a2=0 a3=0 items=0 ppid=2380 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.504000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 16 18:04:04.506345 systemd-networkd[1870]: docker0: Link UP Jan 16 18:04:04.518496 dockerd[2380]: time="2026-01-16T18:04:04.518384495Z" level=info msg="Loading containers: done." Jan 16 18:04:04.545526 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3512290530-merged.mount: Deactivated successfully. 
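The NETFILTER_CFG/SYSCALL records above carry each iptables invocation as a hex-encoded PROCTITLE field (argv stored as one NUL-separated buffer). A minimal sketch for decoding those fields offline, standard library only; the sample string is copied verbatim from one of the ip6tables records above:

    def decode_proctitle(hex_string: str) -> str:
        # The kernel records argv as a single buffer with NUL separators;
        # auditd prints that buffer as hex.
        raw = bytes.fromhex(hex_string)
        return " ".join(part.decode("utf-8", errors="replace")
                        for part in raw.split(b"\x00") if part)

    print(decode_proctitle(
        "2F7573722F62696E2F6970367461626C6573002D2D77616974"
        "002D740066696C746572002D4E00444F434B45522D55534552"
    ))
    # -> /usr/bin/ip6tables --wait -t filter -N DOCKER-USER

Decoded this way, the audit block above reads as Docker registering its DOCKER-USER, DOCKER-FORWARD and DOCKER-ISOLATION chains plus the docker0 MASQUERADE rule before bringing the bridge up.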
Jan 16 18:04:04.576530 dockerd[2380]: time="2026-01-16T18:04:04.576413491Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 16 18:04:04.577146 dockerd[2380]: time="2026-01-16T18:04:04.576824660Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 16 18:04:04.577268 dockerd[2380]: time="2026-01-16T18:04:04.577241965Z" level=info msg="Initializing buildkit" Jan 16 18:04:04.628411 dockerd[2380]: time="2026-01-16T18:04:04.628305656Z" level=info msg="Completed buildkit initialization" Jan 16 18:04:04.645931 dockerd[2380]: time="2026-01-16T18:04:04.645839939Z" level=info msg="Daemon has completed initialization" Jan 16 18:04:04.646345 dockerd[2380]: time="2026-01-16T18:04:04.646149214Z" level=info msg="API listen on /run/docker.sock" Jan 16 18:04:04.647025 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 16 18:04:04.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:06.710239 containerd[1954]: time="2026-01-16T18:04:06.709620448Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 16 18:04:07.617797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3327923186.mount: Deactivated successfully. Jan 16 18:04:08.949157 containerd[1954]: time="2026-01-16T18:04:08.948451560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:04:08.951358 containerd[1954]: time="2026-01-16T18:04:08.951290739Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=24845792" Jan 16 18:04:08.953930 containerd[1954]: time="2026-01-16T18:04:08.953882354Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:04:08.959298 containerd[1954]: time="2026-01-16T18:04:08.959249684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:04:08.961181 containerd[1954]: time="2026-01-16T18:04:08.961106845Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.251413676s" Jan 16 18:04:08.961296 containerd[1954]: time="2026-01-16T18:04:08.961184908Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 16 18:04:08.962329 containerd[1954]: time="2026-01-16T18:04:08.962220257Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 16 18:04:10.611390 containerd[1954]: time="2026-01-16T18:04:10.611311962Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:04:10.614170 containerd[1954]: time="2026-01-16T18:04:10.613997128Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22613932" Jan 16 18:04:10.616322 containerd[1954]: time="2026-01-16T18:04:10.616249034Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:04:10.621333 containerd[1954]: time="2026-01-16T18:04:10.621255704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:04:10.624201 containerd[1954]: time="2026-01-16T18:04:10.623165258Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.660883614s" Jan 16 18:04:10.624201 containerd[1954]: time="2026-01-16T18:04:10.623224616Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 16 18:04:10.624893 containerd[1954]: time="2026-01-16T18:04:10.624690584Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 16 18:04:11.069215 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 16 18:04:11.073061 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 18:04:11.426831 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 16 18:04:11.426965 kernel: audit: type=1130 audit(1768586651.419:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:11.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:11.419458 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 18:04:11.433668 (kubelet)[2663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 18:04:11.509264 kubelet[2663]: E0116 18:04:11.509149 2663 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 18:04:11.513588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 18:04:11.513898 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 16 18:04:11.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 18:04:11.514801 systemd[1]: kubelet.service: Consumed 313ms CPU time, 106.9M memory peak. Jan 16 18:04:11.521162 kernel: audit: type=1131 audit(1768586651.514:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 18:04:12.206653 containerd[1954]: time="2026-01-16T18:04:12.206601395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:04:12.209521 containerd[1954]: time="2026-01-16T18:04:12.209480230Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17608611" Jan 16 18:04:12.212143 containerd[1954]: time="2026-01-16T18:04:12.212080838Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:04:12.217630 containerd[1954]: time="2026-01-16T18:04:12.217578673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:04:12.219406 containerd[1954]: time="2026-01-16T18:04:12.219349246Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.594339843s" Jan 16 18:04:12.219496 containerd[1954]: time="2026-01-16T18:04:12.219403513Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 16 18:04:12.220030 containerd[1954]: time="2026-01-16T18:04:12.219975251Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 16 18:04:13.548685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3417853233.mount: Deactivated successfully. 
Jan 16 18:04:14.118400 containerd[1954]: time="2026-01-16T18:04:14.118322960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:04:14.120711 containerd[1954]: time="2026-01-16T18:04:14.120625254Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=17713718" Jan 16 18:04:14.121539 containerd[1954]: time="2026-01-16T18:04:14.121482135Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:04:14.129147 containerd[1954]: time="2026-01-16T18:04:14.128632248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:04:14.131110 containerd[1954]: time="2026-01-16T18:04:14.131021574Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.910987793s" Jan 16 18:04:14.131275 containerd[1954]: time="2026-01-16T18:04:14.131097356Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 16 18:04:14.132847 containerd[1954]: time="2026-01-16T18:04:14.132700661Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 16 18:04:14.721065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount573242518.mount: Deactivated successfully. Jan 16 18:04:15.272357 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 16 18:04:15.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:15.285157 kernel: audit: type=1131 audit(1768586655.273:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:04:15.291000 audit: BPF prog-id=62 op=UNLOAD Jan 16 18:04:15.294137 kernel: audit: type=1334 audit(1768586655.291:290): prog-id=62 op=UNLOAD Jan 16 18:04:16.040068 containerd[1954]: time="2026-01-16T18:04:16.039875949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:04:16.043201 containerd[1954]: time="2026-01-16T18:04:16.043104267Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=15956282" Jan 16 18:04:16.045564 containerd[1954]: time="2026-01-16T18:04:16.045276524Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:04:16.050280 containerd[1954]: time="2026-01-16T18:04:16.050227535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:04:16.052288 containerd[1954]: time="2026-01-16T18:04:16.052240461Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.919113875s" Jan 16 18:04:16.052687 containerd[1954]: time="2026-01-16T18:04:16.052425809Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 16 18:04:16.052998 containerd[1954]: time="2026-01-16T18:04:16.052948503Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 16 18:04:16.554303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4043149760.mount: Deactivated successfully. 
Jan 16 18:04:16.568082 containerd[1954]: time="2026-01-16T18:04:16.568004958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 18:04:16.571696 containerd[1954]: time="2026-01-16T18:04:16.571346564Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 16 18:04:16.573971 containerd[1954]: time="2026-01-16T18:04:16.573914791Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 18:04:16.578631 containerd[1954]: time="2026-01-16T18:04:16.578584369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 18:04:16.580279 containerd[1954]: time="2026-01-16T18:04:16.579834458Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 526.729469ms" Jan 16 18:04:16.580279 containerd[1954]: time="2026-01-16T18:04:16.579889193Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 16 18:04:16.580706 containerd[1954]: time="2026-01-16T18:04:16.580652799Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 16 18:04:17.404477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2340237115.mount: Deactivated successfully. 
Jan 16 18:04:19.926156 containerd[1954]: time="2026-01-16T18:04:19.925603685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:04:19.929857 containerd[1954]: time="2026-01-16T18:04:19.929778748Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=56456774" Jan 16 18:04:19.932096 containerd[1954]: time="2026-01-16T18:04:19.932011708Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:04:19.941575 containerd[1954]: time="2026-01-16T18:04:19.941483650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:04:19.943801 containerd[1954]: time="2026-01-16T18:04:19.943609264Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.362898608s" Jan 16 18:04:19.943801 containerd[1954]: time="2026-01-16T18:04:19.943660121Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 16 18:04:21.569249 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 16 18:04:21.573482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 18:04:21.942499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 18:04:21.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:21.953157 kernel: audit: type=1130 audit(1768586661.941:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:21.959547 (kubelet)[2822]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 18:04:22.040871 kubelet[2822]: E0116 18:04:22.040785 2822 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 18:04:22.045401 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 18:04:22.045702 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 18:04:22.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 18:04:22.050827 systemd[1]: kubelet.service: Consumed 294ms CPU time, 107.1M memory peak. 
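The pull records above report both the data containerd fetched ("bytes read=...") and the wall-clock pull time, so a rough effective download rate falls out directly. A quick sketch using the etcd figures from this section (assuming "bytes read" is the compressed layer data and the quoted duration is the full pull time):

    bytes_read = 56_456_774      # from "bytes read=56456774"
    duration_s = 3.362898608     # from "in 3.362898608s"
    print(f"{bytes_read / duration_s / 1e6:.1f} MB/s")   # ≈ 16.8 MB/s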
Jan 16 18:04:22.052160 kernel: audit: type=1131 audit(1768586662.045:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 18:04:27.699738 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 18:04:27.700175 systemd[1]: kubelet.service: Consumed 294ms CPU time, 107.1M memory peak. Jan 16 18:04:27.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:27.711701 kernel: audit: type=1130 audit(1768586667.699:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:27.711845 kernel: audit: type=1131 audit(1768586667.700:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:27.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:27.710564 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 18:04:27.764083 systemd[1]: Reload requested from client PID 2836 ('systemctl') (unit session-10.scope)... Jan 16 18:04:27.764140 systemd[1]: Reloading... Jan 16 18:04:28.033175 zram_generator::config[2889]: No configuration found. Jan 16 18:04:28.509827 systemd[1]: Reloading finished in 745 ms. 
Jan 16 18:04:28.549000 audit: BPF prog-id=66 op=LOAD Jan 16 18:04:28.560047 kernel: audit: type=1334 audit(1768586668.549:295): prog-id=66 op=LOAD Jan 16 18:04:28.560188 kernel: audit: type=1334 audit(1768586668.549:296): prog-id=65 op=UNLOAD Jan 16 18:04:28.549000 audit: BPF prog-id=65 op=UNLOAD Jan 16 18:04:28.564000 audit: BPF prog-id=67 op=LOAD Jan 16 18:04:28.564000 audit: BPF prog-id=55 op=UNLOAD Jan 16 18:04:28.569065 kernel: audit: type=1334 audit(1768586668.564:297): prog-id=67 op=LOAD Jan 16 18:04:28.569187 kernel: audit: type=1334 audit(1768586668.564:298): prog-id=55 op=UNLOAD Jan 16 18:04:28.569237 kernel: audit: type=1334 audit(1768586668.566:299): prog-id=68 op=LOAD Jan 16 18:04:28.566000 audit: BPF prog-id=68 op=LOAD Jan 16 18:04:28.576977 kernel: audit: type=1334 audit(1768586668.566:300): prog-id=69 op=LOAD Jan 16 18:04:28.577110 kernel: audit: type=1334 audit(1768586668.566:301): prog-id=56 op=UNLOAD Jan 16 18:04:28.577196 kernel: audit: type=1334 audit(1768586668.566:302): prog-id=57 op=UNLOAD Jan 16 18:04:28.566000 audit: BPF prog-id=69 op=LOAD Jan 16 18:04:28.566000 audit: BPF prog-id=56 op=UNLOAD Jan 16 18:04:28.566000 audit: BPF prog-id=57 op=UNLOAD Jan 16 18:04:28.570000 audit: BPF prog-id=70 op=LOAD Jan 16 18:04:28.574000 audit: BPF prog-id=71 op=LOAD Jan 16 18:04:28.574000 audit: BPF prog-id=43 op=UNLOAD Jan 16 18:04:28.574000 audit: BPF prog-id=44 op=UNLOAD Jan 16 18:04:28.575000 audit: BPF prog-id=72 op=LOAD Jan 16 18:04:28.576000 audit: BPF prog-id=49 op=UNLOAD Jan 16 18:04:28.577000 audit: BPF prog-id=73 op=LOAD Jan 16 18:04:28.577000 audit: BPF prog-id=74 op=LOAD Jan 16 18:04:28.577000 audit: BPF prog-id=50 op=UNLOAD Jan 16 18:04:28.577000 audit: BPF prog-id=51 op=UNLOAD Jan 16 18:04:28.579000 audit: BPF prog-id=75 op=LOAD Jan 16 18:04:28.594000 audit: BPF prog-id=58 op=UNLOAD Jan 16 18:04:28.595000 audit: BPF prog-id=76 op=LOAD Jan 16 18:04:28.595000 audit: BPF prog-id=52 op=UNLOAD Jan 16 18:04:28.596000 audit: BPF prog-id=77 op=LOAD Jan 16 18:04:28.596000 audit: BPF prog-id=78 op=LOAD Jan 16 18:04:28.596000 audit: BPF prog-id=53 op=UNLOAD Jan 16 18:04:28.596000 audit: BPF prog-id=54 op=UNLOAD Jan 16 18:04:28.597000 audit: BPF prog-id=79 op=LOAD Jan 16 18:04:28.597000 audit: BPF prog-id=45 op=UNLOAD Jan 16 18:04:28.597000 audit: BPF prog-id=80 op=LOAD Jan 16 18:04:28.597000 audit: BPF prog-id=81 op=LOAD Jan 16 18:04:28.597000 audit: BPF prog-id=46 op=UNLOAD Jan 16 18:04:28.598000 audit: BPF prog-id=47 op=UNLOAD Jan 16 18:04:28.601000 audit: BPF prog-id=82 op=LOAD Jan 16 18:04:28.601000 audit: BPF prog-id=59 op=UNLOAD Jan 16 18:04:28.602000 audit: BPF prog-id=83 op=LOAD Jan 16 18:04:28.602000 audit: BPF prog-id=84 op=LOAD Jan 16 18:04:28.602000 audit: BPF prog-id=60 op=UNLOAD Jan 16 18:04:28.602000 audit: BPF prog-id=61 op=UNLOAD Jan 16 18:04:28.603000 audit: BPF prog-id=85 op=LOAD Jan 16 18:04:28.603000 audit: BPF prog-id=48 op=UNLOAD Jan 16 18:04:28.632268 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 16 18:04:28.632470 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 16 18:04:28.633158 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 18:04:28.633259 systemd[1]: kubelet.service: Consumed 232ms CPU time, 95.4M memory peak. Jan 16 18:04:28.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 16 18:04:28.637982 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 18:04:28.726802 update_engine[1936]: I20260116 18:04:28.726620 1936 update_attempter.cc:509] Updating boot flags... Jan 16 18:04:29.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:29.168499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 18:04:29.202586 (kubelet)[3045]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 18:04:29.276861 kubelet[3045]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 18:04:29.277767 kubelet[3045]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 16 18:04:29.277767 kubelet[3045]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 18:04:29.277767 kubelet[3045]: I0116 18:04:29.277501 3045 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 18:04:31.554705 kubelet[3045]: I0116 18:04:31.554652 3045 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 16 18:04:31.556206 kubelet[3045]: I0116 18:04:31.555327 3045 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 18:04:31.556206 kubelet[3045]: I0116 18:04:31.555828 3045 server.go:954] "Client rotation is on, will bootstrap in background" Jan 16 18:04:31.598399 kubelet[3045]: E0116 18:04:31.598325 3045 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.22.249:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.22.249:6443: connect: connection refused" logger="UnhandledError" Jan 16 18:04:31.604956 kubelet[3045]: I0116 18:04:31.604915 3045 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 18:04:31.619167 kubelet[3045]: I0116 18:04:31.618346 3045 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 16 18:04:31.624133 kubelet[3045]: I0116 18:04:31.624081 3045 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 16 18:04:31.624761 kubelet[3045]: I0116 18:04:31.624714 3045 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 18:04:31.625149 kubelet[3045]: I0116 18:04:31.624853 3045 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-249","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 16 18:04:31.625517 kubelet[3045]: I0116 18:04:31.625496 3045 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 18:04:31.625617 kubelet[3045]: I0116 18:04:31.625600 3045 container_manager_linux.go:304] "Creating device plugin manager" Jan 16 18:04:31.626058 kubelet[3045]: I0116 18:04:31.626038 3045 state_mem.go:36] "Initialized new in-memory state store" Jan 16 18:04:31.631824 kubelet[3045]: I0116 18:04:31.631789 3045 kubelet.go:446] "Attempting to sync node with API server" Jan 16 18:04:31.631977 kubelet[3045]: I0116 18:04:31.631957 3045 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 18:04:31.632143 kubelet[3045]: I0116 18:04:31.632100 3045 kubelet.go:352] "Adding apiserver pod source" Jan 16 18:04:31.632258 kubelet[3045]: I0116 18:04:31.632240 3045 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 18:04:31.636164 kubelet[3045]: W0116 18:04:31.635415 3045 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.249:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-249&limit=500&resourceVersion=0": dial tcp 172.31.22.249:6443: connect: connection refused Jan 16 18:04:31.636164 kubelet[3045]: E0116 18:04:31.635535 3045 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.22.249:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-249&limit=500&resourceVersion=0\": dial tcp 172.31.22.249:6443: connect: connection refused" logger="UnhandledError" Jan 16 18:04:31.637367 kubelet[3045]: W0116 
18:04:31.637294 3045 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.249:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.22.249:6443: connect: connection refused Jan 16 18:04:31.637477 kubelet[3045]: E0116 18:04:31.637388 3045 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.22.249:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.249:6443: connect: connection refused" logger="UnhandledError" Jan 16 18:04:31.639154 kubelet[3045]: I0116 18:04:31.638145 3045 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 16 18:04:31.639264 kubelet[3045]: I0116 18:04:31.639162 3045 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 18:04:31.639410 kubelet[3045]: W0116 18:04:31.639379 3045 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 16 18:04:31.642888 kubelet[3045]: I0116 18:04:31.642828 3045 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 16 18:04:31.642888 kubelet[3045]: I0116 18:04:31.642892 3045 server.go:1287] "Started kubelet" Jan 16 18:04:31.656919 kubelet[3045]: I0116 18:04:31.656864 3045 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 18:04:31.659353 kubelet[3045]: E0116 18:04:31.658868 3045 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.249:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.249:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-249.188b48351dc36715 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-249,UID:ip-172-31-22-249,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-249,},FirstTimestamp:2026-01-16 18:04:31.642863381 +0000 UTC m=+2.433640806,LastTimestamp:2026-01-16 18:04:31.642863381 +0000 UTC m=+2.433640806,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-249,}" Jan 16 18:04:31.664000 audit[3057]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=3057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:31.664000 audit[3057]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffcfb880f0 a2=0 a3=0 items=0 ppid=3045 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:31.664000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 16 18:04:31.666000 audit[3058]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=3058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:31.666000 audit[3058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa406ca0 a2=0 a3=0 items=0 ppid=3045 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:31.666000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 16 18:04:31.667277 kubelet[3045]: I0116 18:04:31.666798 3045 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 18:04:31.667919 kubelet[3045]: I0116 18:04:31.667878 3045 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 16 18:04:31.669151 kubelet[3045]: E0116 18:04:31.668826 3045 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-22-249\" not found" Jan 16 18:04:31.670647 kubelet[3045]: I0116 18:04:31.670614 3045 server.go:479] "Adding debug handlers to kubelet server" Jan 16 18:04:31.670000 audit[3060]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=3060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:31.670000 audit[3060]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff7598310 a2=0 a3=0 items=0 ppid=3045 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:31.670000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 16 18:04:31.672143 kubelet[3045]: I0116 18:04:31.671944 3045 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 16 18:04:31.672143 kubelet[3045]: I0116 18:04:31.672066 3045 reconciler.go:26] "Reconciler: start to sync state" Jan 16 18:04:31.673940 kubelet[3045]: I0116 18:04:31.673891 3045 factory.go:221] Registration of the systemd container factory successfully Jan 16 18:04:31.674134 kubelet[3045]: I0116 18:04:31.674073 3045 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 18:04:31.675758 kubelet[3045]: I0116 18:04:31.675597 3045 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 18:04:31.677163 kubelet[3045]: E0116 18:04:31.676699 3045 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-249?timeout=10s\": dial tcp 172.31.22.249:6443: connect: connection refused" interval="200ms" Jan 16 18:04:31.677163 kubelet[3045]: W0116 18:04:31.676864 3045 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.249:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.249:6443: connect: connection refused Jan 16 18:04:31.677163 kubelet[3045]: E0116 18:04:31.676937 3045 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.22.249:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.249:6443: connect: connection refused" logger="UnhandledError" Jan 16 18:04:31.677985 kubelet[3045]: I0116 18:04:31.677935 3045 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 16 18:04:31.678759 kubelet[3045]: I0116 18:04:31.678707 3045 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 18:04:31.678980 kubelet[3045]: E0116 18:04:31.678932 3045 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 18:04:31.682161 kubelet[3045]: I0116 18:04:31.681147 3045 factory.go:221] Registration of the containerd container factory successfully Jan 16 18:04:31.682000 audit[3062]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=3062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:31.682000 audit[3062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc7afbdf0 a2=0 a3=0 items=0 ppid=3045 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:31.682000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 16 18:04:31.706000 audit[3067]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=3067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:31.706000 audit[3067]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffdac349d0 a2=0 a3=0 items=0 ppid=3045 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:31.706000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 16 18:04:31.708672 kubelet[3045]: I0116 18:04:31.708610 3045 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 18:04:31.711000 audit[3068]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=3068 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:31.711000 audit[3068]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd85c0080 a2=0 a3=0 items=0 ppid=3045 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:31.711000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 16 18:04:31.714815 kubelet[3045]: I0116 18:04:31.714754 3045 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 16 18:04:31.714938 kubelet[3045]: I0116 18:04:31.714830 3045 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 16 18:04:31.714938 kubelet[3045]: I0116 18:04:31.714865 3045 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 16 18:04:31.714938 kubelet[3045]: I0116 18:04:31.714881 3045 kubelet.go:2382] "Starting kubelet main sync loop" Jan 16 18:04:31.715092 kubelet[3045]: E0116 18:04:31.715001 3045 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 18:04:31.715000 audit[3069]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=3069 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:31.715000 audit[3069]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffefae4d20 a2=0 a3=0 items=0 ppid=3045 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:31.715000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 16 18:04:31.719346 kubelet[3045]: W0116 18:04:31.719282 3045 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.249:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.249:6443: connect: connection refused Jan 16 18:04:31.719505 kubelet[3045]: E0116 18:04:31.719361 3045 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.22.249:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.249:6443: connect: connection refused" logger="UnhandledError" Jan 16 18:04:31.720000 audit[3070]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=3070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:31.720000 audit[3070]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffe68cca0 a2=0 a3=0 items=0 ppid=3045 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:31.720000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 16 18:04:31.725000 audit[3071]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=3071 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:31.725000 audit[3071]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdf7ab850 a2=0 a3=0 items=0 ppid=3045 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:31.725000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 16 18:04:31.727000 audit[3074]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=3074 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:31.727000 audit[3074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff75f4240 a2=0 a3=0 items=0 ppid=3045 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:31.727000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 16 18:04:31.730009 kubelet[3045]: I0116 18:04:31.729600 3045 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 16 18:04:31.730009 kubelet[3045]: I0116 18:04:31.729631 3045 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 16 18:04:31.730009 kubelet[3045]: I0116 18:04:31.729690 3045 state_mem.go:36] "Initialized new in-memory state store" Jan 16 18:04:31.731000 audit[3076]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=3076 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:31.731000 audit[3076]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc0c42530 a2=0 a3=0 items=0 ppid=3045 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:31.731000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 16 18:04:31.732000 audit[3077]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=3077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:31.732000 audit[3077]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd8ff7450 a2=0 a3=0 items=0 ppid=3045 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:31.732000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 16 18:04:31.739841 kubelet[3045]: I0116 18:04:31.739419 3045 policy_none.go:49] "None policy: Start" Jan 16 18:04:31.739841 kubelet[3045]: I0116 18:04:31.739457 3045 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 16 18:04:31.739841 kubelet[3045]: I0116 18:04:31.739480 3045 state_mem.go:35] "Initializing new in-memory state store" Jan 16 18:04:31.752803 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 16 18:04:31.770106 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 16 18:04:31.771425 kubelet[3045]: E0116 18:04:31.769481 3045 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-22-249\" not found" Jan 16 18:04:31.777568 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 16 18:04:31.799819 kubelet[3045]: I0116 18:04:31.799785 3045 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 18:04:31.800273 kubelet[3045]: I0116 18:04:31.800249 3045 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 16 18:04:31.800448 kubelet[3045]: I0116 18:04:31.800395 3045 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 18:04:31.802006 kubelet[3045]: I0116 18:04:31.801624 3045 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 18:04:31.806637 kubelet[3045]: E0116 18:04:31.806093 3045 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 16 18:04:31.806637 kubelet[3045]: E0116 18:04:31.806198 3045 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-22-249\" not found" Jan 16 18:04:31.833738 systemd[1]: Created slice kubepods-burstable-podfbb1bcdf27c3060f56af4c17a57fd017.slice - libcontainer container kubepods-burstable-podfbb1bcdf27c3060f56af4c17a57fd017.slice. Jan 16 18:04:31.850675 kubelet[3045]: E0116 18:04:31.850627 3045 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-249\" not found" node="ip-172-31-22-249" Jan 16 18:04:31.855305 systemd[1]: Created slice kubepods-burstable-podbe23bef16f9d7ff5eda4ba2452d085e3.slice - libcontainer container kubepods-burstable-podbe23bef16f9d7ff5eda4ba2452d085e3.slice. Jan 16 18:04:31.861340 kubelet[3045]: E0116 18:04:31.861297 3045 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-249\" not found" node="ip-172-31-22-249" Jan 16 18:04:31.867060 systemd[1]: Created slice kubepods-burstable-pod3849975482addc5f9bd23e0ab28ec184.slice - libcontainer container kubepods-burstable-pod3849975482addc5f9bd23e0ab28ec184.slice. 
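The kubepods-burstable-pod<UID>.slice units systemd creates above follow the kubelet's systemd cgroup-driver naming. A small sketch reproducing the name for one of the static pods in this log (the dash-to-underscore substitution is assumed from the driver's usual behaviour and is not exercised by these hex UIDs):

    def pod_slice(uid: str, qos: str = "burstable") -> str:
        # Matches the slice names reported above; UIDs containing dashes
        # would have them replaced with underscores by the systemd driver.
        return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

    print(pod_slice("fbb1bcdf27c3060f56af4c17a57fd017"))
    # -> kubepods-burstable-podfbb1bcdf27c3060f56af4c17a57fd017.slice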
Jan 16 18:04:31.871429 kubelet[3045]: E0116 18:04:31.871261 3045 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-249\" not found" node="ip-172-31-22-249" Jan 16 18:04:31.877629 kubelet[3045]: E0116 18:04:31.877583 3045 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-249?timeout=10s\": dial tcp 172.31.22.249:6443: connect: connection refused" interval="400ms" Jan 16 18:04:31.902719 kubelet[3045]: I0116 18:04:31.902677 3045 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-249" Jan 16 18:04:31.903950 kubelet[3045]: E0116 18:04:31.903895 3045 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.249:6443/api/v1/nodes\": dial tcp 172.31.22.249:6443: connect: connection refused" node="ip-172-31-22-249" Jan 16 18:04:31.973729 kubelet[3045]: I0116 18:04:31.973262 3045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fbb1bcdf27c3060f56af4c17a57fd017-ca-certs\") pod \"kube-apiserver-ip-172-31-22-249\" (UID: \"fbb1bcdf27c3060f56af4c17a57fd017\") " pod="kube-system/kube-apiserver-ip-172-31-22-249" Jan 16 18:04:31.973729 kubelet[3045]: I0116 18:04:31.973312 3045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fbb1bcdf27c3060f56af4c17a57fd017-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-249\" (UID: \"fbb1bcdf27c3060f56af4c17a57fd017\") " pod="kube-system/kube-apiserver-ip-172-31-22-249" Jan 16 18:04:31.973729 kubelet[3045]: I0116 18:04:31.973351 3045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fbb1bcdf27c3060f56af4c17a57fd017-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-249\" (UID: \"fbb1bcdf27c3060f56af4c17a57fd017\") " pod="kube-system/kube-apiserver-ip-172-31-22-249" Jan 16 18:04:31.973729 kubelet[3045]: I0116 18:04:31.973393 3045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/be23bef16f9d7ff5eda4ba2452d085e3-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-249\" (UID: \"be23bef16f9d7ff5eda4ba2452d085e3\") " pod="kube-system/kube-controller-manager-ip-172-31-22-249" Jan 16 18:04:31.973729 kubelet[3045]: I0116 18:04:31.973429 3045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/be23bef16f9d7ff5eda4ba2452d085e3-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-249\" (UID: \"be23bef16f9d7ff5eda4ba2452d085e3\") " pod="kube-system/kube-controller-manager-ip-172-31-22-249" Jan 16 18:04:31.974051 kubelet[3045]: I0116 18:04:31.973463 3045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/be23bef16f9d7ff5eda4ba2452d085e3-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-249\" (UID: \"be23bef16f9d7ff5eda4ba2452d085e3\") " pod="kube-system/kube-controller-manager-ip-172-31-22-249" Jan 16 18:04:31.974051 kubelet[3045]: I0116 18:04:31.973499 3045 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/be23bef16f9d7ff5eda4ba2452d085e3-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-249\" (UID: \"be23bef16f9d7ff5eda4ba2452d085e3\") " pod="kube-system/kube-controller-manager-ip-172-31-22-249" Jan 16 18:04:31.974051 kubelet[3045]: I0116 18:04:31.973535 3045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/be23bef16f9d7ff5eda4ba2452d085e3-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-249\" (UID: \"be23bef16f9d7ff5eda4ba2452d085e3\") " pod="kube-system/kube-controller-manager-ip-172-31-22-249" Jan 16 18:04:31.974051 kubelet[3045]: I0116 18:04:31.973571 3045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3849975482addc5f9bd23e0ab28ec184-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-249\" (UID: \"3849975482addc5f9bd23e0ab28ec184\") " pod="kube-system/kube-scheduler-ip-172-31-22-249" Jan 16 18:04:32.107436 kubelet[3045]: I0116 18:04:32.107393 3045 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-249" Jan 16 18:04:32.107921 kubelet[3045]: E0116 18:04:32.107877 3045 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.249:6443/api/v1/nodes\": dial tcp 172.31.22.249:6443: connect: connection refused" node="ip-172-31-22-249" Jan 16 18:04:32.152789 containerd[1954]: time="2026-01-16T18:04:32.152721238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-249,Uid:fbb1bcdf27c3060f56af4c17a57fd017,Namespace:kube-system,Attempt:0,}" Jan 16 18:04:32.162755 containerd[1954]: time="2026-01-16T18:04:32.162637210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-249,Uid:be23bef16f9d7ff5eda4ba2452d085e3,Namespace:kube-system,Attempt:0,}" Jan 16 18:04:32.173204 containerd[1954]: time="2026-01-16T18:04:32.173111400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-249,Uid:3849975482addc5f9bd23e0ab28ec184,Namespace:kube-system,Attempt:0,}" Jan 16 18:04:32.217582 containerd[1954]: time="2026-01-16T18:04:32.217399481Z" level=info msg="connecting to shim 595da0a8466b7079d6cb2755ac12c47929e04b8c412bae1894dd1ab260bdc0c8" address="unix:///run/containerd/s/9c96f7d4c2905af3bf745023b38ba903115ac6c4b5891acfc5938c53c7eb5698" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:04:32.265327 containerd[1954]: time="2026-01-16T18:04:32.264629309Z" level=info msg="connecting to shim 0cf68e6ecfab145105983c9a62b949c9b291bbf3c6730a73f4e7efbe5f4a182b" address="unix:///run/containerd/s/6b9dbb0c925572450377404f15b1ff69b8f207b4ed3ee457093752bc272dd730" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:04:32.279165 kubelet[3045]: E0116 18:04:32.279047 3045 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-249?timeout=10s\": dial tcp 172.31.22.249:6443: connect: connection refused" interval="800ms" Jan 16 18:04:32.287562 containerd[1954]: time="2026-01-16T18:04:32.287489345Z" level=info msg="connecting to shim 7bc4297459c02ee725d6e752d422514e9c81c486f352e48d8d71da824e2b8493" 
address="unix:///run/containerd/s/ec21f3bb03eefe9435b21c0645969036483c31d39d4a7d37317cf1b9525a4032" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:04:32.308754 kubelet[3045]: E0116 18:04:32.308521 3045 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.249:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.249:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-249.188b48351dc36715 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-249,UID:ip-172-31-22-249,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-249,},FirstTimestamp:2026-01-16 18:04:31.642863381 +0000 UTC m=+2.433640806,LastTimestamp:2026-01-16 18:04:31.642863381 +0000 UTC m=+2.433640806,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-249,}" Jan 16 18:04:32.327470 systemd[1]: Started cri-containerd-595da0a8466b7079d6cb2755ac12c47929e04b8c412bae1894dd1ab260bdc0c8.scope - libcontainer container 595da0a8466b7079d6cb2755ac12c47929e04b8c412bae1894dd1ab260bdc0c8. Jan 16 18:04:32.358558 systemd[1]: Started cri-containerd-0cf68e6ecfab145105983c9a62b949c9b291bbf3c6730a73f4e7efbe5f4a182b.scope - libcontainer container 0cf68e6ecfab145105983c9a62b949c9b291bbf3c6730a73f4e7efbe5f4a182b. Jan 16 18:04:32.373748 systemd[1]: Started cri-containerd-7bc4297459c02ee725d6e752d422514e9c81c486f352e48d8d71da824e2b8493.scope - libcontainer container 7bc4297459c02ee725d6e752d422514e9c81c486f352e48d8d71da824e2b8493. Jan 16 18:04:32.378000 audit: BPF prog-id=86 op=LOAD Jan 16 18:04:32.386000 audit: BPF prog-id=87 op=LOAD Jan 16 18:04:32.386000 audit[3101]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=3087 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.386000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539356461306138343636623730373964366362323735356163313263 Jan 16 18:04:32.386000 audit: BPF prog-id=87 op=UNLOAD Jan 16 18:04:32.386000 audit[3101]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.386000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539356461306138343636623730373964366362323735356163313263 Jan 16 18:04:32.386000 audit: BPF prog-id=88 op=LOAD Jan 16 18:04:32.386000 audit[3101]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3087 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.386000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539356461306138343636623730373964366362323735356163313263 Jan 16 18:04:32.387000 audit: BPF prog-id=89 op=LOAD Jan 16 18:04:32.387000 audit[3101]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3087 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539356461306138343636623730373964366362323735356163313263 Jan 16 18:04:32.387000 audit: BPF prog-id=89 op=UNLOAD Jan 16 18:04:32.387000 audit[3101]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539356461306138343636623730373964366362323735356163313263 Jan 16 18:04:32.387000 audit: BPF prog-id=88 op=UNLOAD Jan 16 18:04:32.387000 audit[3101]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539356461306138343636623730373964366362323735356163313263 Jan 16 18:04:32.388000 audit: BPF prog-id=90 op=LOAD Jan 16 18:04:32.388000 audit[3101]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=3087 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539356461306138343636623730373964366362323735356163313263 Jan 16 18:04:32.392000 audit: BPF prog-id=91 op=LOAD Jan 16 18:04:32.395000 audit: BPF prog-id=92 op=LOAD Jan 16 18:04:32.395000 audit[3139]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400020c180 a2=98 a3=0 items=0 ppid=3106 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.395000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063663638653665636661623134353130353938336339613632623934 Jan 16 18:04:32.395000 audit: BPF prog-id=92 op=UNLOAD Jan 16 18:04:32.395000 audit[3139]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3106 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063663638653665636661623134353130353938336339613632623934 Jan 16 18:04:32.395000 audit: BPF prog-id=93 op=LOAD Jan 16 18:04:32.395000 audit[3139]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400020c3e8 a2=98 a3=0 items=0 ppid=3106 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063663638653665636661623134353130353938336339613632623934 Jan 16 18:04:32.395000 audit: BPF prog-id=94 op=LOAD Jan 16 18:04:32.395000 audit[3139]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400020c168 a2=98 a3=0 items=0 ppid=3106 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063663638653665636661623134353130353938336339613632623934 Jan 16 18:04:32.396000 audit: BPF prog-id=94 op=UNLOAD Jan 16 18:04:32.396000 audit[3139]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3106 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063663638653665636661623134353130353938336339613632623934 Jan 16 18:04:32.396000 audit: BPF prog-id=93 op=UNLOAD Jan 16 18:04:32.396000 audit[3139]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3106 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.396000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063663638653665636661623134353130353938336339613632623934 Jan 16 18:04:32.396000 audit: BPF prog-id=95 op=LOAD Jan 16 18:04:32.396000 audit[3139]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400020c648 a2=98 a3=0 items=0 ppid=3106 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063663638653665636661623134353130353938336339613632623934 Jan 16 18:04:32.426000 audit: BPF prog-id=96 op=LOAD Jan 16 18:04:32.428000 audit: BPF prog-id=97 op=LOAD Jan 16 18:04:32.428000 audit[3155]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3135 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762633432393734353963303265653732356436653735326434323235 Jan 16 18:04:32.429000 audit: BPF prog-id=97 op=UNLOAD Jan 16 18:04:32.429000 audit[3155]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3135 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.429000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762633432393734353963303265653732356436653735326434323235 Jan 16 18:04:32.429000 audit: BPF prog-id=98 op=LOAD Jan 16 18:04:32.429000 audit[3155]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3135 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.429000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762633432393734353963303265653732356436653735326434323235 Jan 16 18:04:32.429000 audit: BPF prog-id=99 op=LOAD Jan 16 18:04:32.429000 audit[3155]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3135 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.429000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762633432393734353963303265653732356436653735326434323235 Jan 16 18:04:32.429000 audit: BPF prog-id=99 op=UNLOAD Jan 16 18:04:32.429000 audit[3155]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3135 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.429000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762633432393734353963303265653732356436653735326434323235 Jan 16 18:04:32.429000 audit: BPF prog-id=98 op=UNLOAD Jan 16 18:04:32.429000 audit[3155]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3135 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.429000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762633432393734353963303265653732356436653735326434323235 Jan 16 18:04:32.429000 audit: BPF prog-id=100 op=LOAD Jan 16 18:04:32.429000 audit[3155]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3135 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.429000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762633432393734353963303265653732356436653735326434323235 Jan 16 18:04:32.495147 containerd[1954]: time="2026-01-16T18:04:32.495070109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-249,Uid:fbb1bcdf27c3060f56af4c17a57fd017,Namespace:kube-system,Attempt:0,} returns sandbox id \"595da0a8466b7079d6cb2755ac12c47929e04b8c412bae1894dd1ab260bdc0c8\"" Jan 16 18:04:32.507985 containerd[1954]: time="2026-01-16T18:04:32.507763105Z" level=info msg="CreateContainer within sandbox \"595da0a8466b7079d6cb2755ac12c47929e04b8c412bae1894dd1ab260bdc0c8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 16 18:04:32.511322 containerd[1954]: time="2026-01-16T18:04:32.511253621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-249,Uid:be23bef16f9d7ff5eda4ba2452d085e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cf68e6ecfab145105983c9a62b949c9b291bbf3c6730a73f4e7efbe5f4a182b\"" Jan 16 18:04:32.513342 kubelet[3045]: I0116 18:04:32.513288 3045 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-249" Jan 16 18:04:32.513817 kubelet[3045]: E0116 18:04:32.513769 3045 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://172.31.22.249:6443/api/v1/nodes\": dial tcp 172.31.22.249:6443: connect: connection refused" node="ip-172-31-22-249" Jan 16 18:04:32.528140 containerd[1954]: time="2026-01-16T18:04:32.527861293Z" level=info msg="CreateContainer within sandbox \"0cf68e6ecfab145105983c9a62b949c9b291bbf3c6730a73f4e7efbe5f4a182b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 16 18:04:32.530060 containerd[1954]: time="2026-01-16T18:04:32.530010883Z" level=info msg="Container 179b369c9aeb527df786cd73b9fcdcfb2372338a1ab8e9427e192e2d7bfa766d: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:04:32.537974 containerd[1954]: time="2026-01-16T18:04:32.537851750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-249,Uid:3849975482addc5f9bd23e0ab28ec184,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bc4297459c02ee725d6e752d422514e9c81c486f352e48d8d71da824e2b8493\"" Jan 16 18:04:32.541110 kubelet[3045]: W0116 18:04:32.541030 3045 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.249:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.22.249:6443: connect: connection refused Jan 16 18:04:32.541270 kubelet[3045]: E0116 18:04:32.541160 3045 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.22.249:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.249:6443: connect: connection refused" logger="UnhandledError" Jan 16 18:04:32.543277 containerd[1954]: time="2026-01-16T18:04:32.543068549Z" level=info msg="CreateContainer within sandbox \"7bc4297459c02ee725d6e752d422514e9c81c486f352e48d8d71da824e2b8493\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 16 18:04:32.556367 containerd[1954]: time="2026-01-16T18:04:32.556309498Z" level=info msg="CreateContainer within sandbox \"595da0a8466b7079d6cb2755ac12c47929e04b8c412bae1894dd1ab260bdc0c8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"179b369c9aeb527df786cd73b9fcdcfb2372338a1ab8e9427e192e2d7bfa766d\"" Jan 16 18:04:32.562024 containerd[1954]: time="2026-01-16T18:04:32.561924260Z" level=info msg="StartContainer for \"179b369c9aeb527df786cd73b9fcdcfb2372338a1ab8e9427e192e2d7bfa766d\"" Jan 16 18:04:32.564500 containerd[1954]: time="2026-01-16T18:04:32.564447729Z" level=info msg="Container 1f406ebd7fc3f7623765978e044de838cb45e23f2a0b1e70e54ba4a71c921342: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:04:32.566954 containerd[1954]: time="2026-01-16T18:04:32.566903881Z" level=info msg="connecting to shim 179b369c9aeb527df786cd73b9fcdcfb2372338a1ab8e9427e192e2d7bfa766d" address="unix:///run/containerd/s/9c96f7d4c2905af3bf745023b38ba903115ac6c4b5891acfc5938c53c7eb5698" protocol=ttrpc version=3 Jan 16 18:04:32.581749 containerd[1954]: time="2026-01-16T18:04:32.581661968Z" level=info msg="CreateContainer within sandbox \"0cf68e6ecfab145105983c9a62b949c9b291bbf3c6730a73f4e7efbe5f4a182b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1f406ebd7fc3f7623765978e044de838cb45e23f2a0b1e70e54ba4a71c921342\"" Jan 16 18:04:32.584176 containerd[1954]: time="2026-01-16T18:04:32.583463637Z" level=info msg="StartContainer for \"1f406ebd7fc3f7623765978e044de838cb45e23f2a0b1e70e54ba4a71c921342\"" Jan 16 18:04:32.584656 
containerd[1954]: time="2026-01-16T18:04:32.584602154Z" level=info msg="Container 7d7114f73586481f9cd8e40feffe9cbad3b034778a89dd322699c0d137a069f0: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:04:32.591455 containerd[1954]: time="2026-01-16T18:04:32.591380165Z" level=info msg="connecting to shim 1f406ebd7fc3f7623765978e044de838cb45e23f2a0b1e70e54ba4a71c921342" address="unix:///run/containerd/s/6b9dbb0c925572450377404f15b1ff69b8f207b4ed3ee457093752bc272dd730" protocol=ttrpc version=3 Jan 16 18:04:32.606475 kubelet[3045]: W0116 18:04:32.606385 3045 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.249:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.249:6443: connect: connection refused Jan 16 18:04:32.607300 kubelet[3045]: E0116 18:04:32.607110 3045 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.22.249:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.249:6443: connect: connection refused" logger="UnhandledError" Jan 16 18:04:32.612748 systemd[1]: Started cri-containerd-179b369c9aeb527df786cd73b9fcdcfb2372338a1ab8e9427e192e2d7bfa766d.scope - libcontainer container 179b369c9aeb527df786cd73b9fcdcfb2372338a1ab8e9427e192e2d7bfa766d. Jan 16 18:04:32.614612 containerd[1954]: time="2026-01-16T18:04:32.612101525Z" level=info msg="CreateContainer within sandbox \"7bc4297459c02ee725d6e752d422514e9c81c486f352e48d8d71da824e2b8493\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7d7114f73586481f9cd8e40feffe9cbad3b034778a89dd322699c0d137a069f0\"" Jan 16 18:04:32.616363 containerd[1954]: time="2026-01-16T18:04:32.616286228Z" level=info msg="StartContainer for \"7d7114f73586481f9cd8e40feffe9cbad3b034778a89dd322699c0d137a069f0\"" Jan 16 18:04:32.620864 containerd[1954]: time="2026-01-16T18:04:32.620743215Z" level=info msg="connecting to shim 7d7114f73586481f9cd8e40feffe9cbad3b034778a89dd322699c0d137a069f0" address="unix:///run/containerd/s/ec21f3bb03eefe9435b21c0645969036483c31d39d4a7d37317cf1b9525a4032" protocol=ttrpc version=3 Jan 16 18:04:32.651157 systemd[1]: Started cri-containerd-1f406ebd7fc3f7623765978e044de838cb45e23f2a0b1e70e54ba4a71c921342.scope - libcontainer container 1f406ebd7fc3f7623765978e044de838cb45e23f2a0b1e70e54ba4a71c921342. 
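The recurring "dial tcp 172.31.22.249:6443: connect: connection refused" errors above (lease renewal, node registration, and the Service/CSIDriver reflectors) all come from the kubelet trying to reach a kube-apiserver that is not listening yet; the apiserver is itself one of the static pods whose sandboxes and containers are being created in these same entries. A minimal, stdlib-only Go sketch (not part of the log; the endpoint is copied from the errors above) that reproduces the same failure mode:

package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    // Endpoint taken from the kubelet errors above; adjust for another cluster.
    const apiserver = "172.31.22.249:6443"

    // Until the kube-apiserver static pod is up and listening, this fails with
    // "connect: connection refused", matching the reflector and lease errors.
    conn, err := net.DialTimeout("tcp", apiserver, 2*time.Second)
    if err != nil {
        fmt.Println("apiserver not reachable yet:", err)
        return
    }
    defer conn.Close()
    fmt.Println("apiserver TCP port is open:", conn.RemoteAddr())
}

Once the kube-apiserver container started below reports "StartContainer ... returns successfully" and begins serving, these errors stop and registration goes through (see the "Successfully registered node" entry at 18:04:35).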
Jan 16 18:04:32.667000 audit: BPF prog-id=101 op=LOAD Jan 16 18:04:32.672000 audit: BPF prog-id=102 op=LOAD Jan 16 18:04:32.672000 audit[3219]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3087 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.672000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137396233363963396165623532376466373836636437336239666364 Jan 16 18:04:32.672000 audit: BPF prog-id=102 op=UNLOAD Jan 16 18:04:32.672000 audit[3219]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.672000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137396233363963396165623532376466373836636437336239666364 Jan 16 18:04:32.674000 audit: BPF prog-id=103 op=LOAD Jan 16 18:04:32.674000 audit[3219]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3087 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137396233363963396165623532376466373836636437336239666364 Jan 16 18:04:32.674000 audit: BPF prog-id=104 op=LOAD Jan 16 18:04:32.674000 audit[3219]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3087 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137396233363963396165623532376466373836636437336239666364 Jan 16 18:04:32.674000 audit: BPF prog-id=104 op=UNLOAD Jan 16 18:04:32.674000 audit[3219]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137396233363963396165623532376466373836636437336239666364 Jan 16 18:04:32.674000 audit: BPF prog-id=103 op=UNLOAD Jan 16 18:04:32.674000 audit[3219]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137396233363963396165623532376466373836636437336239666364 Jan 16 18:04:32.674000 audit: BPF prog-id=105 op=LOAD Jan 16 18:04:32.674000 audit[3219]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3087 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137396233363963396165623532376466373836636437336239666364 Jan 16 18:04:32.690493 systemd[1]: Started cri-containerd-7d7114f73586481f9cd8e40feffe9cbad3b034778a89dd322699c0d137a069f0.scope - libcontainer container 7d7114f73586481f9cd8e40feffe9cbad3b034778a89dd322699c0d137a069f0. Jan 16 18:04:32.708000 audit: BPF prog-id=106 op=LOAD Jan 16 18:04:32.713019 kernel: kauditd_printk_skb: 158 callbacks suppressed Jan 16 18:04:32.713367 kernel: audit: type=1334 audit(1768586672.708:381): prog-id=106 op=LOAD Jan 16 18:04:32.712000 audit: BPF prog-id=107 op=LOAD Jan 16 18:04:32.718058 kernel: audit: type=1334 audit(1768586672.712:382): prog-id=107 op=LOAD Jan 16 18:04:32.712000 audit[3232]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3106 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.726745 kernel: audit: type=1300 audit(1768586672.712:382): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3106 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.735693 kernel: audit: type=1327 audit(1768586672.712:382): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166343036656264376663336637363233373635393738653034346465 Jan 16 18:04:32.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166343036656264376663336637363233373635393738653034346465 Jan 16 18:04:32.743607 kernel: audit: type=1334 audit(1768586672.712:383): prog-id=107 op=UNLOAD Jan 16 18:04:32.712000 audit: BPF prog-id=107 op=UNLOAD Jan 16 18:04:32.712000 audit[3232]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3106 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.757738 kernel: audit: type=1300 audit(1768586672.712:383): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3106 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.757869 kernel: audit: type=1327 audit(1768586672.712:383): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166343036656264376663336637363233373635393738653034346465 Jan 16 18:04:32.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166343036656264376663336637363233373635393738653034346465 Jan 16 18:04:32.764258 kernel: audit: type=1334 audit(1768586672.712:384): prog-id=108 op=LOAD Jan 16 18:04:32.712000 audit: BPF prog-id=108 op=LOAD Jan 16 18:04:32.771900 kernel: audit: type=1300 audit(1768586672.712:384): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3106 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.712000 audit[3232]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3106 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.783817 kernel: audit: type=1327 audit(1768586672.712:384): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166343036656264376663336637363233373635393738653034346465 Jan 16 18:04:32.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166343036656264376663336637363233373635393738653034346465 Jan 16 18:04:32.712000 audit: BPF prog-id=109 op=LOAD Jan 16 18:04:32.712000 audit[3232]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3106 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166343036656264376663336637363233373635393738653034346465 Jan 16 18:04:32.713000 audit: BPF prog-id=109 op=UNLOAD Jan 16 18:04:32.713000 audit[3232]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3106 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166343036656264376663336637363233373635393738653034346465 Jan 16 18:04:32.713000 audit: BPF prog-id=108 op=UNLOAD Jan 16 18:04:32.713000 audit[3232]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3106 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166343036656264376663336637363233373635393738653034346465 Jan 16 18:04:32.713000 audit: BPF prog-id=110 op=LOAD Jan 16 18:04:32.713000 audit[3232]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3106 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166343036656264376663336637363233373635393738653034346465 Jan 16 18:04:32.756000 audit: BPF prog-id=111 op=LOAD Jan 16 18:04:32.758000 audit: BPF prog-id=112 op=LOAD Jan 16 18:04:32.758000 audit[3250]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=3135 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764373131346637333538363438316639636438653430666566666539 Jan 16 18:04:32.758000 audit: BPF prog-id=112 op=UNLOAD Jan 16 18:04:32.758000 audit[3250]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3135 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764373131346637333538363438316639636438653430666566666539 Jan 16 18:04:32.758000 audit: BPF prog-id=113 op=LOAD Jan 16 18:04:32.758000 audit[3250]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=3135 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
18:04:32.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764373131346637333538363438316639636438653430666566666539 Jan 16 18:04:32.763000 audit: BPF prog-id=114 op=LOAD Jan 16 18:04:32.763000 audit[3250]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=3135 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764373131346637333538363438316639636438653430666566666539 Jan 16 18:04:32.764000 audit: BPF prog-id=114 op=UNLOAD Jan 16 18:04:32.764000 audit[3250]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3135 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764373131346637333538363438316639636438653430666566666539 Jan 16 18:04:32.771000 audit: BPF prog-id=113 op=UNLOAD Jan 16 18:04:32.771000 audit[3250]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3135 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764373131346637333538363438316639636438653430666566666539 Jan 16 18:04:32.771000 audit: BPF prog-id=115 op=LOAD Jan 16 18:04:32.771000 audit[3250]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=3135 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:32.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764373131346637333538363438316639636438653430666566666539 Jan 16 18:04:32.879405 containerd[1954]: time="2026-01-16T18:04:32.877366365Z" level=info msg="StartContainer for \"1f406ebd7fc3f7623765978e044de838cb45e23f2a0b1e70e54ba4a71c921342\" returns successfully" Jan 16 18:04:32.879405 containerd[1954]: time="2026-01-16T18:04:32.877638312Z" level=info msg="StartContainer for \"179b369c9aeb527df786cd73b9fcdcfb2372338a1ab8e9427e192e2d7bfa766d\" returns successfully" Jan 16 18:04:32.918754 containerd[1954]: time="2026-01-16T18:04:32.918580910Z" level=info msg="StartContainer for 
\"7d7114f73586481f9cd8e40feffe9cbad3b034778a89dd322699c0d137a069f0\" returns successfully" Jan 16 18:04:33.317688 kubelet[3045]: I0116 18:04:33.317529 3045 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-249" Jan 16 18:04:33.810425 kubelet[3045]: E0116 18:04:33.810367 3045 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-249\" not found" node="ip-172-31-22-249" Jan 16 18:04:33.812679 kubelet[3045]: E0116 18:04:33.811407 3045 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-249\" not found" node="ip-172-31-22-249" Jan 16 18:04:33.820080 kubelet[3045]: E0116 18:04:33.817836 3045 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-249\" not found" node="ip-172-31-22-249" Jan 16 18:04:34.822787 kubelet[3045]: E0116 18:04:34.822732 3045 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-249\" not found" node="ip-172-31-22-249" Jan 16 18:04:34.823353 kubelet[3045]: E0116 18:04:34.823309 3045 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-249\" not found" node="ip-172-31-22-249" Jan 16 18:04:34.827256 kubelet[3045]: E0116 18:04:34.827194 3045 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-249\" not found" node="ip-172-31-22-249" Jan 16 18:04:35.808997 kubelet[3045]: E0116 18:04:35.808934 3045 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-22-249\" not found" node="ip-172-31-22-249" Jan 16 18:04:35.823144 kubelet[3045]: E0116 18:04:35.823084 3045 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-249\" not found" node="ip-172-31-22-249" Jan 16 18:04:35.825104 kubelet[3045]: E0116 18:04:35.824947 3045 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-249\" not found" node="ip-172-31-22-249" Jan 16 18:04:35.834614 kubelet[3045]: I0116 18:04:35.834550 3045 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-22-249" Jan 16 18:04:35.870171 kubelet[3045]: I0116 18:04:35.870085 3045 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-249" Jan 16 18:04:36.127346 kubelet[3045]: E0116 18:04:36.127290 3045 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-22-249\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-22-249" Jan 16 18:04:36.127346 kubelet[3045]: I0116 18:04:36.127341 3045 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-249" Jan 16 18:04:36.137695 kubelet[3045]: E0116 18:04:36.137643 3045 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-22-249\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-22-249" Jan 16 18:04:36.137695 kubelet[3045]: I0116 18:04:36.137686 3045 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-249" Jan 16 
18:04:36.149136 kubelet[3045]: E0116 18:04:36.149063 3045 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-22-249\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-22-249" Jan 16 18:04:36.640921 kubelet[3045]: I0116 18:04:36.640837 3045 apiserver.go:52] "Watching apiserver" Jan 16 18:04:36.673239 kubelet[3045]: I0116 18:04:36.673180 3045 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 16 18:04:38.135976 kubelet[3045]: I0116 18:04:38.135713 3045 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-249" Jan 16 18:04:38.176949 systemd[1]: Reload requested from client PID 3321 ('systemctl') (unit session-10.scope)... Jan 16 18:04:38.176981 systemd[1]: Reloading... Jan 16 18:04:38.400180 zram_generator::config[3377]: No configuration found. Jan 16 18:04:38.731774 kubelet[3045]: I0116 18:04:38.731729 3045 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-249" Jan 16 18:04:38.899688 systemd[1]: Reloading finished in 722 ms. Jan 16 18:04:38.967236 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 18:04:38.984530 systemd[1]: kubelet.service: Deactivated successfully. Jan 16 18:04:38.985384 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 18:04:38.985626 systemd[1]: kubelet.service: Consumed 3.185s CPU time, 129.6M memory peak. Jan 16 18:04:38.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:38.988304 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 16 18:04:38.988393 kernel: audit: type=1131 audit(1768586678.984:397): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:38.991586 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
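The audit records interleaved above (BPF prog-id LOAD/UNLOAD, SYSCALL, PROCTITLE) accompany containerd invoking runc for the sandboxes and containers created earlier; the kernel "audit: type=..." lines with matching timestamps appear to be the same records echoed through the kernel ring buffer, rate-limited by kauditd ("callbacks suppressed"). The proctitle= field is the invoking command line, hex-encoded because its arguments are NUL-separated. A small Go sketch (not part of the log) decoding a prefix of the first proctitle value above makes that visible:

package main

import (
    "encoding/hex"
    "fmt"
    "strings"
)

func main() {
    // Prefix of the first proctitle= value in the audit records above. The full
    // logged value continues with --log and the shim's task directory under
    // /run/containerd/io.containerd.runtime.v2.task/k8s.io/595da0a8... (the
    // record itself truncates the container ID).
    const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"

    raw, err := hex.DecodeString(proctitle)
    if err != nil {
        panic(err)
    }
    // PROCTITLE stores argv NUL-separated; join it back into a readable command line.
    fmt.Println(strings.ReplaceAll(string(raw), "\x00", " "))
    // Prints: runc --root /run/containerd/runc/k8s.io
}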
Jan 16 18:04:38.993000 audit: BPF prog-id=116 op=LOAD Jan 16 18:04:39.003726 kernel: audit: type=1334 audit(1768586678.993:398): prog-id=116 op=LOAD Jan 16 18:04:39.003838 kernel: audit: type=1334 audit(1768586678.993:399): prog-id=85 op=UNLOAD Jan 16 18:04:39.003896 kernel: audit: type=1334 audit(1768586678.997:400): prog-id=117 op=LOAD Jan 16 18:04:39.003942 kernel: audit: type=1334 audit(1768586678.997:401): prog-id=66 op=UNLOAD Jan 16 18:04:39.003986 kernel: audit: type=1334 audit(1768586678.999:402): prog-id=118 op=LOAD Jan 16 18:04:39.004026 kernel: audit: type=1334 audit(1768586678.999:403): prog-id=67 op=UNLOAD Jan 16 18:04:38.993000 audit: BPF prog-id=85 op=UNLOAD Jan 16 18:04:38.997000 audit: BPF prog-id=117 op=LOAD Jan 16 18:04:38.997000 audit: BPF prog-id=66 op=UNLOAD Jan 16 18:04:38.999000 audit: BPF prog-id=118 op=LOAD Jan 16 18:04:38.999000 audit: BPF prog-id=67 op=UNLOAD Jan 16 18:04:39.007139 kernel: audit: type=1334 audit(1768586679.004:404): prog-id=119 op=LOAD Jan 16 18:04:39.007238 kernel: audit: type=1334 audit(1768586679.006:405): prog-id=120 op=LOAD Jan 16 18:04:39.004000 audit: BPF prog-id=119 op=LOAD Jan 16 18:04:39.006000 audit: BPF prog-id=120 op=LOAD Jan 16 18:04:39.010183 kernel: audit: type=1334 audit(1768586679.006:406): prog-id=68 op=UNLOAD Jan 16 18:04:39.006000 audit: BPF prog-id=68 op=UNLOAD Jan 16 18:04:39.006000 audit: BPF prog-id=69 op=UNLOAD Jan 16 18:04:39.010000 audit: BPF prog-id=121 op=LOAD Jan 16 18:04:39.010000 audit: BPF prog-id=75 op=UNLOAD Jan 16 18:04:39.016000 audit: BPF prog-id=122 op=LOAD Jan 16 18:04:39.016000 audit: BPF prog-id=72 op=UNLOAD Jan 16 18:04:39.017000 audit: BPF prog-id=123 op=LOAD Jan 16 18:04:39.017000 audit: BPF prog-id=124 op=LOAD Jan 16 18:04:39.017000 audit: BPF prog-id=73 op=UNLOAD Jan 16 18:04:39.017000 audit: BPF prog-id=74 op=UNLOAD Jan 16 18:04:39.018000 audit: BPF prog-id=125 op=LOAD Jan 16 18:04:39.028000 audit: BPF prog-id=76 op=UNLOAD Jan 16 18:04:39.028000 audit: BPF prog-id=126 op=LOAD Jan 16 18:04:39.028000 audit: BPF prog-id=127 op=LOAD Jan 16 18:04:39.028000 audit: BPF prog-id=77 op=UNLOAD Jan 16 18:04:39.029000 audit: BPF prog-id=78 op=UNLOAD Jan 16 18:04:39.032000 audit: BPF prog-id=128 op=LOAD Jan 16 18:04:39.032000 audit: BPF prog-id=82 op=UNLOAD Jan 16 18:04:39.032000 audit: BPF prog-id=129 op=LOAD Jan 16 18:04:39.032000 audit: BPF prog-id=130 op=LOAD Jan 16 18:04:39.032000 audit: BPF prog-id=83 op=UNLOAD Jan 16 18:04:39.033000 audit: BPF prog-id=84 op=UNLOAD Jan 16 18:04:39.034000 audit: BPF prog-id=131 op=LOAD Jan 16 18:04:39.034000 audit: BPF prog-id=79 op=UNLOAD Jan 16 18:04:39.034000 audit: BPF prog-id=132 op=LOAD Jan 16 18:04:39.034000 audit: BPF prog-id=133 op=LOAD Jan 16 18:04:39.034000 audit: BPF prog-id=80 op=UNLOAD Jan 16 18:04:39.034000 audit: BPF prog-id=81 op=UNLOAD Jan 16 18:04:39.035000 audit: BPF prog-id=134 op=LOAD Jan 16 18:04:39.035000 audit: BPF prog-id=135 op=LOAD Jan 16 18:04:39.035000 audit: BPF prog-id=70 op=UNLOAD Jan 16 18:04:39.035000 audit: BPF prog-id=71 op=UNLOAD Jan 16 18:04:39.417204 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 18:04:39.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:04:39.433695 (kubelet)[3428]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 18:04:39.543198 kubelet[3428]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 18:04:39.545184 kubelet[3428]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 16 18:04:39.545184 kubelet[3428]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 18:04:39.545184 kubelet[3428]: I0116 18:04:39.543933 3428 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 18:04:39.558010 kubelet[3428]: I0116 18:04:39.557966 3428 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 16 18:04:39.558372 kubelet[3428]: I0116 18:04:39.558346 3428 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 18:04:39.558976 kubelet[3428]: I0116 18:04:39.558943 3428 server.go:954] "Client rotation is on, will bootstrap in background" Jan 16 18:04:39.562947 kubelet[3428]: I0116 18:04:39.562911 3428 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 16 18:04:39.567790 kubelet[3428]: I0116 18:04:39.567749 3428 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 18:04:39.578499 kubelet[3428]: I0116 18:04:39.578466 3428 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 16 18:04:39.584922 kubelet[3428]: I0116 18:04:39.584846 3428 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 16 18:04:39.586175 kubelet[3428]: I0116 18:04:39.585968 3428 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 18:04:39.586787 kubelet[3428]: I0116 18:04:39.586045 3428 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-249","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 16 18:04:39.586787 kubelet[3428]: I0116 18:04:39.586372 3428 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 18:04:39.586787 kubelet[3428]: I0116 18:04:39.586393 3428 container_manager_linux.go:304] "Creating device plugin manager" Jan 16 18:04:39.586787 kubelet[3428]: I0116 18:04:39.586476 3428 state_mem.go:36] "Initialized new in-memory state store" Jan 16 18:04:39.586787 kubelet[3428]: I0116 18:04:39.586725 3428 kubelet.go:446] "Attempting to sync node with API server" Jan 16 18:04:39.588996 kubelet[3428]: I0116 18:04:39.586749 3428 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 18:04:39.588996 kubelet[3428]: I0116 18:04:39.587430 3428 kubelet.go:352] "Adding apiserver pod source" Jan 16 18:04:39.588996 kubelet[3428]: I0116 18:04:39.587461 3428 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 18:04:39.598131 kubelet[3428]: I0116 18:04:39.598051 3428 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 16 18:04:39.599301 kubelet[3428]: I0116 18:04:39.599244 3428 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 18:04:39.604565 kubelet[3428]: I0116 18:04:39.604490 3428 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 16 18:04:39.604733 kubelet[3428]: I0116 18:04:39.604591 3428 server.go:1287] "Started kubelet" Jan 16 18:04:39.617740 kubelet[3428]: I0116 18:04:39.617676 3428 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 18:04:39.650341 kubelet[3428]: I0116 18:04:39.649207 3428 server.go:169] "Starting to 
listen" address="0.0.0.0" port=10250 Jan 16 18:04:39.652526 kubelet[3428]: I0116 18:04:39.652463 3428 server.go:479] "Adding debug handlers to kubelet server" Jan 16 18:04:39.660149 kubelet[3428]: I0116 18:04:39.659230 3428 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 18:04:39.661548 kubelet[3428]: I0116 18:04:39.661410 3428 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 18:04:39.661941 kubelet[3428]: I0116 18:04:39.661887 3428 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 16 18:04:39.677222 kubelet[3428]: I0116 18:04:39.676007 3428 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 16 18:04:39.678529 kubelet[3428]: E0116 18:04:39.677672 3428 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-22-249\" not found" Jan 16 18:04:39.689900 kubelet[3428]: I0116 18:04:39.689857 3428 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 16 18:04:39.694028 kubelet[3428]: I0116 18:04:39.693971 3428 reconciler.go:26] "Reconciler: start to sync state" Jan 16 18:04:39.705064 kubelet[3428]: I0116 18:04:39.703312 3428 factory.go:221] Registration of the systemd container factory successfully Jan 16 18:04:39.705064 kubelet[3428]: I0116 18:04:39.703489 3428 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 18:04:39.712651 kubelet[3428]: I0116 18:04:39.711835 3428 factory.go:221] Registration of the containerd container factory successfully Jan 16 18:04:39.718148 kubelet[3428]: E0116 18:04:39.718082 3428 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 18:04:39.718553 kubelet[3428]: I0116 18:04:39.718490 3428 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 18:04:39.733182 kubelet[3428]: I0116 18:04:39.732767 3428 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 16 18:04:39.733182 kubelet[3428]: I0116 18:04:39.732828 3428 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 16 18:04:39.733182 kubelet[3428]: I0116 18:04:39.732859 3428 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
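The NodeConfig dump in the restarted kubelet's startup output above lists its hard-eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A minimal sketch of how such thresholds are evaluated, using made-up node stats (illustrative only, not kubelet code):

package main

import "fmt"

// signal models one hard-eviction threshold from the NodeConfig above:
// either an absolute quantity (e.g. 100Mi) or a fraction of capacity.
type signal struct {
    name     string
    quantity int64   // absolute threshold in bytes; 0 if percentage-based
    percent  float64 // fractional threshold; 0 if quantity-based
}

func breached(s signal, available, capacity int64) bool {
    if s.quantity > 0 {
        return available < s.quantity
    }
    return float64(available) < s.percent*float64(capacity)
}

func main() {
    thresholds := []signal{
        {name: "memory.available", quantity: 100 << 20}, // 100Mi, as logged
        {name: "nodefs.available", percent: 0.10},
        {name: "imagefs.available", percent: 0.15},
    }
    // Hypothetical node stats {available, capacity}, purely illustrative.
    stats := map[string][2]int64{
        "memory.available":  {80 << 20, 16 << 30},  // 80Mi free of 16Gi
        "nodefs.available":  {30 << 30, 100 << 30}, // 30Gi free of 100Gi
        "imagefs.available": {10 << 30, 100 << 30}, // 10Gi free of 100Gi
    }
    for _, t := range thresholds {
        s := stats[t.name]
        fmt.Printf("%-18s breached=%v\n", t.name, breached(t, s[0], s[1]))
    }
}

The "Eviction manager: starting control loop" entry that follows is where the kubelet begins watching these signals; the "no imagefs label for configured runtime" error just below it records that the manager could not yet tell whether image storage sits on a separate filesystem and skips that check.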
Jan 16 18:04:39.733182 kubelet[3428]: I0116 18:04:39.732873 3428 kubelet.go:2382] "Starting kubelet main sync loop" Jan 16 18:04:39.733182 kubelet[3428]: E0116 18:04:39.732939 3428 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 18:04:39.835219 kubelet[3428]: E0116 18:04:39.835150 3428 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 16 18:04:39.863319 kubelet[3428]: I0116 18:04:39.863268 3428 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 16 18:04:39.863319 kubelet[3428]: I0116 18:04:39.863307 3428 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 16 18:04:39.863521 kubelet[3428]: I0116 18:04:39.863346 3428 state_mem.go:36] "Initialized new in-memory state store" Jan 16 18:04:39.863782 kubelet[3428]: I0116 18:04:39.863629 3428 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 16 18:04:39.863782 kubelet[3428]: I0116 18:04:39.863661 3428 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 16 18:04:39.863782 kubelet[3428]: I0116 18:04:39.863698 3428 policy_none.go:49] "None policy: Start" Jan 16 18:04:39.863782 kubelet[3428]: I0116 18:04:39.863716 3428 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 16 18:04:39.863782 kubelet[3428]: I0116 18:04:39.863738 3428 state_mem.go:35] "Initializing new in-memory state store" Jan 16 18:04:39.864027 kubelet[3428]: I0116 18:04:39.863922 3428 state_mem.go:75] "Updated machine memory state" Jan 16 18:04:39.882658 kubelet[3428]: I0116 18:04:39.882534 3428 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 18:04:39.883905 kubelet[3428]: I0116 18:04:39.882881 3428 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 16 18:04:39.883905 kubelet[3428]: I0116 18:04:39.882914 3428 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 18:04:39.883905 kubelet[3428]: I0116 18:04:39.883321 3428 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 18:04:39.889170 kubelet[3428]: E0116 18:04:39.886420 3428 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 16 18:04:40.011654 kubelet[3428]: I0116 18:04:40.011542 3428 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-249" Jan 16 18:04:40.028623 kubelet[3428]: I0116 18:04:40.028493 3428 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-22-249" Jan 16 18:04:40.030957 kubelet[3428]: I0116 18:04:40.029475 3428 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-22-249" Jan 16 18:04:40.036343 kubelet[3428]: I0116 18:04:40.036292 3428 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-249" Jan 16 18:04:40.037174 kubelet[3428]: I0116 18:04:40.036904 3428 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-249" Jan 16 18:04:40.039686 kubelet[3428]: I0116 18:04:40.039614 3428 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-249" Jan 16 18:04:40.060565 kubelet[3428]: E0116 18:04:40.060452 3428 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-22-249\" already exists" pod="kube-system/kube-apiserver-ip-172-31-22-249" Jan 16 18:04:40.065881 kubelet[3428]: E0116 18:04:40.065561 3428 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-22-249\" already exists" pod="kube-system/kube-scheduler-ip-172-31-22-249" Jan 16 18:04:40.097967 kubelet[3428]: I0116 18:04:40.097899 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fbb1bcdf27c3060f56af4c17a57fd017-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-249\" (UID: \"fbb1bcdf27c3060f56af4c17a57fd017\") " pod="kube-system/kube-apiserver-ip-172-31-22-249" Jan 16 18:04:40.097967 kubelet[3428]: I0116 18:04:40.097973 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/be23bef16f9d7ff5eda4ba2452d085e3-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-249\" (UID: \"be23bef16f9d7ff5eda4ba2452d085e3\") " pod="kube-system/kube-controller-manager-ip-172-31-22-249" Jan 16 18:04:40.098224 kubelet[3428]: I0116 18:04:40.098041 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/be23bef16f9d7ff5eda4ba2452d085e3-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-249\" (UID: \"be23bef16f9d7ff5eda4ba2452d085e3\") " pod="kube-system/kube-controller-manager-ip-172-31-22-249" Jan 16 18:04:40.098224 kubelet[3428]: I0116 18:04:40.098083 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/be23bef16f9d7ff5eda4ba2452d085e3-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-249\" (UID: \"be23bef16f9d7ff5eda4ba2452d085e3\") " pod="kube-system/kube-controller-manager-ip-172-31-22-249" Jan 16 18:04:40.098413 kubelet[3428]: I0116 18:04:40.098375 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fbb1bcdf27c3060f56af4c17a57fd017-ca-certs\") pod \"kube-apiserver-ip-172-31-22-249\" (UID: \"fbb1bcdf27c3060f56af4c17a57fd017\") " pod="kube-system/kube-apiserver-ip-172-31-22-249" Jan 16 18:04:40.098545 
kubelet[3428]: I0116 18:04:40.098487 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fbb1bcdf27c3060f56af4c17a57fd017-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-249\" (UID: \"fbb1bcdf27c3060f56af4c17a57fd017\") " pod="kube-system/kube-apiserver-ip-172-31-22-249" Jan 16 18:04:40.098636 kubelet[3428]: I0116 18:04:40.098563 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/be23bef16f9d7ff5eda4ba2452d085e3-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-249\" (UID: \"be23bef16f9d7ff5eda4ba2452d085e3\") " pod="kube-system/kube-controller-manager-ip-172-31-22-249" Jan 16 18:04:40.098719 kubelet[3428]: I0116 18:04:40.098645 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/be23bef16f9d7ff5eda4ba2452d085e3-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-249\" (UID: \"be23bef16f9d7ff5eda4ba2452d085e3\") " pod="kube-system/kube-controller-manager-ip-172-31-22-249" Jan 16 18:04:40.098808 kubelet[3428]: I0116 18:04:40.098740 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3849975482addc5f9bd23e0ab28ec184-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-249\" (UID: \"3849975482addc5f9bd23e0ab28ec184\") " pod="kube-system/kube-scheduler-ip-172-31-22-249" Jan 16 18:04:40.589718 kubelet[3428]: I0116 18:04:40.589649 3428 apiserver.go:52] "Watching apiserver" Jan 16 18:04:40.690393 kubelet[3428]: I0116 18:04:40.690343 3428 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 16 18:04:40.781135 kubelet[3428]: I0116 18:04:40.781075 3428 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-249" Jan 16 18:04:40.781450 kubelet[3428]: I0116 18:04:40.781331 3428 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-249" Jan 16 18:04:40.795111 kubelet[3428]: E0116 18:04:40.793526 3428 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-22-249\" already exists" pod="kube-system/kube-scheduler-ip-172-31-22-249" Jan 16 18:04:40.805363 kubelet[3428]: E0116 18:04:40.805312 3428 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-22-249\" already exists" pod="kube-system/kube-apiserver-ip-172-31-22-249" Jan 16 18:04:40.833931 kubelet[3428]: I0116 18:04:40.833782 3428 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-22-249" podStartSLOduration=2.833762454 podStartE2EDuration="2.833762454s" podCreationTimestamp="2026-01-16 18:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 18:04:40.833522538 +0000 UTC m=+1.391221434" watchObservedRunningTime="2026-01-16 18:04:40.833762454 +0000 UTC m=+1.391461338" Jan 16 18:04:40.895553 kubelet[3428]: I0116 18:04:40.895090 3428 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-22-249" podStartSLOduration=2.895039518 podStartE2EDuration="2.895039518s" podCreationTimestamp="2026-01-16 18:04:38 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 18:04:40.860652652 +0000 UTC m=+1.418351560" watchObservedRunningTime="2026-01-16 18:04:40.895039518 +0000 UTC m=+1.452738414" Jan 16 18:04:40.895553 kubelet[3428]: I0116 18:04:40.895276 3428 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-22-249" podStartSLOduration=0.895266599 podStartE2EDuration="895.266599ms" podCreationTimestamp="2026-01-16 18:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 18:04:40.893015714 +0000 UTC m=+1.450714622" watchObservedRunningTime="2026-01-16 18:04:40.895266599 +0000 UTC m=+1.452965507" Jan 16 18:04:43.743891 kubelet[3428]: I0116 18:04:43.743845 3428 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 16 18:04:43.744948 kubelet[3428]: I0116 18:04:43.744731 3428 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 16 18:04:43.745578 containerd[1954]: time="2026-01-16T18:04:43.744349584Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 16 18:04:44.746852 systemd[1]: Created slice kubepods-besteffort-poda5d7b6c7_cb8a_4945_afa7_75901a38d878.slice - libcontainer container kubepods-besteffort-poda5d7b6c7_cb8a_4945_afa7_75901a38d878.slice. Jan 16 18:04:44.826851 kubelet[3428]: I0116 18:04:44.826726 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5d7b6c7-cb8a-4945-afa7-75901a38d878-lib-modules\") pod \"kube-proxy-kgxt5\" (UID: \"a5d7b6c7-cb8a-4945-afa7-75901a38d878\") " pod="kube-system/kube-proxy-kgxt5" Jan 16 18:04:44.826851 kubelet[3428]: I0116 18:04:44.826808 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a5d7b6c7-cb8a-4945-afa7-75901a38d878-kube-proxy\") pod \"kube-proxy-kgxt5\" (UID: \"a5d7b6c7-cb8a-4945-afa7-75901a38d878\") " pod="kube-system/kube-proxy-kgxt5" Jan 16 18:04:44.826851 kubelet[3428]: I0116 18:04:44.826846 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5d7b6c7-cb8a-4945-afa7-75901a38d878-xtables-lock\") pod \"kube-proxy-kgxt5\" (UID: \"a5d7b6c7-cb8a-4945-afa7-75901a38d878\") " pod="kube-system/kube-proxy-kgxt5" Jan 16 18:04:44.827543 kubelet[3428]: I0116 18:04:44.826943 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgf77\" (UniqueName: \"kubernetes.io/projected/a5d7b6c7-cb8a-4945-afa7-75901a38d878-kube-api-access-xgf77\") pod \"kube-proxy-kgxt5\" (UID: \"a5d7b6c7-cb8a-4945-afa7-75901a38d878\") " pod="kube-system/kube-proxy-kgxt5" Jan 16 18:04:44.897331 systemd[1]: Created slice kubepods-besteffort-podb251acbc_6988_4386_bfc5_6450d151332a.slice - libcontainer container kubepods-besteffort-podb251acbc_6988_4386_bfc5_6450d151332a.slice. 
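The pod_startup_latency_tracker entries a few lines above report podStartSLOduration next to the raw timestamps, and since firstStartedPulling/lastFinishedPulling are zero here (no image pull was recorded), the reported value lines up with watchObservedRunningTime minus podCreationTimestamp. A quick check of the kube-scheduler entry's numbers; this only illustrates the logged values, not kubelet internals.

    from datetime import datetime, timezone

    # Timestamps copied from the kube-scheduler-ip-172-31-22-249 entry above
    # (watchObservedRunningTime truncated to microsecond precision).
    created = datetime(2026, 1, 16, 18, 4, 38, tzinfo=timezone.utc)
    observed = datetime(2026, 1, 16, 18, 4, 40, 833762, tzinfo=timezone.utc)

    print((observed - created).total_seconds())  # 2.833762, matching podStartSLOduration=2.833762454
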
Jan 16 18:04:44.928076 kubelet[3428]: I0116 18:04:44.928021 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b251acbc-6988-4386-bfc5-6450d151332a-var-lib-calico\") pod \"tigera-operator-7dcd859c48-wncwf\" (UID: \"b251acbc-6988-4386-bfc5-6450d151332a\") " pod="tigera-operator/tigera-operator-7dcd859c48-wncwf" Jan 16 18:04:44.929144 kubelet[3428]: I0116 18:04:44.928317 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sckv7\" (UniqueName: \"kubernetes.io/projected/b251acbc-6988-4386-bfc5-6450d151332a-kube-api-access-sckv7\") pod \"tigera-operator-7dcd859c48-wncwf\" (UID: \"b251acbc-6988-4386-bfc5-6450d151332a\") " pod="tigera-operator/tigera-operator-7dcd859c48-wncwf" Jan 16 18:04:45.060889 containerd[1954]: time="2026-01-16T18:04:45.060380320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kgxt5,Uid:a5d7b6c7-cb8a-4945-afa7-75901a38d878,Namespace:kube-system,Attempt:0,}" Jan 16 18:04:45.106321 containerd[1954]: time="2026-01-16T18:04:45.106243733Z" level=info msg="connecting to shim aa78f78a02e2c88a81daedc8af298022d44fa4f7c60b6b9d15d63751eb02d697" address="unix:///run/containerd/s/e5316b4cab4ec493ce48d9fd9cfcafa8bef0863caaa38159fc87620f9612350b" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:04:45.164445 systemd[1]: Started cri-containerd-aa78f78a02e2c88a81daedc8af298022d44fa4f7c60b6b9d15d63751eb02d697.scope - libcontainer container aa78f78a02e2c88a81daedc8af298022d44fa4f7c60b6b9d15d63751eb02d697. Jan 16 18:04:45.188416 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 16 18:04:45.188563 kernel: audit: type=1334 audit(1768586685.185:439): prog-id=136 op=LOAD Jan 16 18:04:45.185000 audit: BPF prog-id=136 op=LOAD Jan 16 18:04:45.188000 audit: BPF prog-id=137 op=LOAD Jan 16 18:04:45.191341 kernel: audit: type=1334 audit(1768586685.188:440): prog-id=137 op=LOAD Jan 16 18:04:45.191409 kernel: audit: type=1300 audit(1768586685.188:440): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=3485 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.188000 audit[3497]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=3485 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.197845 kernel: audit: type=1327 audit(1768586685.188:440): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161373866373861303265326338386138316461656463386166323938 Jan 16 18:04:45.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161373866373861303265326338386138316461656463386166323938 Jan 16 18:04:45.203872 kernel: audit: type=1334 audit(1768586685.188:441): prog-id=137 op=UNLOAD Jan 16 18:04:45.188000 audit: BPF prog-id=137 op=UNLOAD Jan 16 18:04:45.211152 kernel: audit: type=1300 audit(1768586685.188:441): 
arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3485 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.211220 kernel: audit: type=1327 audit(1768586685.188:441): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161373866373861303265326338386138316461656463386166323938 Jan 16 18:04:45.188000 audit[3497]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3485 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161373866373861303265326338386138316461656463386166323938 Jan 16 18:04:45.217771 containerd[1954]: time="2026-01-16T18:04:45.217716802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-wncwf,Uid:b251acbc-6988-4386-bfc5-6450d151332a,Namespace:tigera-operator,Attempt:0,}" Jan 16 18:04:45.188000 audit: BPF prog-id=138 op=LOAD Jan 16 18:04:45.220635 kernel: audit: type=1334 audit(1768586685.188:442): prog-id=138 op=LOAD Jan 16 18:04:45.220852 kernel: audit: type=1300 audit(1768586685.188:442): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=3485 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.188000 audit[3497]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=3485 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161373866373861303265326338386138316461656463386166323938 Jan 16 18:04:45.262581 kernel: audit: type=1327 audit(1768586685.188:442): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161373866373861303265326338386138316461656463386166323938 Jan 16 18:04:45.196000 audit: BPF prog-id=139 op=LOAD Jan 16 18:04:45.196000 audit[3497]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=3485 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.196000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161373866373861303265326338386138316461656463386166323938 Jan 16 18:04:45.210000 audit: BPF prog-id=139 op=UNLOAD Jan 16 18:04:45.210000 audit[3497]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3485 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161373866373861303265326338386138316461656463386166323938 Jan 16 18:04:45.210000 audit: BPF prog-id=138 op=UNLOAD Jan 16 18:04:45.210000 audit[3497]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3485 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161373866373861303265326338386138316461656463386166323938 Jan 16 18:04:45.210000 audit: BPF prog-id=140 op=LOAD Jan 16 18:04:45.210000 audit[3497]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=3485 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161373866373861303265326338386138316461656463386166323938 Jan 16 18:04:45.307653 containerd[1954]: time="2026-01-16T18:04:45.304910294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kgxt5,Uid:a5d7b6c7-cb8a-4945-afa7-75901a38d878,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa78f78a02e2c88a81daedc8af298022d44fa4f7c60b6b9d15d63751eb02d697\"" Jan 16 18:04:45.321862 containerd[1954]: time="2026-01-16T18:04:45.321812089Z" level=info msg="CreateContainer within sandbox \"aa78f78a02e2c88a81daedc8af298022d44fa4f7c60b6b9d15d63751eb02d697\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 16 18:04:45.326172 containerd[1954]: time="2026-01-16T18:04:45.326053292Z" level=info msg="connecting to shim 694afc46986c4405d73f8c529f08a1df4f5242ca7bbebfd83c86f9943f892e85" address="unix:///run/containerd/s/3d208ab3ac49fa4a75a1ca7781353a447ff5fa2700a09a2a532624efc82c2dbb" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:04:45.345702 containerd[1954]: time="2026-01-16T18:04:45.345648753Z" level=info msg="Container 24d580434c4718dd3c03f1a88ebeb1c9d532234b207a95faccd9004307e89668: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:04:45.369610 containerd[1954]: time="2026-01-16T18:04:45.369343651Z" level=info msg="CreateContainer within sandbox 
\"aa78f78a02e2c88a81daedc8af298022d44fa4f7c60b6b9d15d63751eb02d697\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"24d580434c4718dd3c03f1a88ebeb1c9d532234b207a95faccd9004307e89668\"" Jan 16 18:04:45.369524 systemd[1]: Started cri-containerd-694afc46986c4405d73f8c529f08a1df4f5242ca7bbebfd83c86f9943f892e85.scope - libcontainer container 694afc46986c4405d73f8c529f08a1df4f5242ca7bbebfd83c86f9943f892e85. Jan 16 18:04:45.374021 containerd[1954]: time="2026-01-16T18:04:45.373966201Z" level=info msg="StartContainer for \"24d580434c4718dd3c03f1a88ebeb1c9d532234b207a95faccd9004307e89668\"" Jan 16 18:04:45.383052 containerd[1954]: time="2026-01-16T18:04:45.382979898Z" level=info msg="connecting to shim 24d580434c4718dd3c03f1a88ebeb1c9d532234b207a95faccd9004307e89668" address="unix:///run/containerd/s/e5316b4cab4ec493ce48d9fd9cfcafa8bef0863caaa38159fc87620f9612350b" protocol=ttrpc version=3 Jan 16 18:04:45.414000 audit: BPF prog-id=141 op=LOAD Jan 16 18:04:45.416000 audit: BPF prog-id=142 op=LOAD Jan 16 18:04:45.416000 audit[3542]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3530 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639346166633436393836633434303564373366386335323966303861 Jan 16 18:04:45.416000 audit: BPF prog-id=142 op=UNLOAD Jan 16 18:04:45.416000 audit[3542]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3530 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639346166633436393836633434303564373366386335323966303861 Jan 16 18:04:45.416000 audit: BPF prog-id=143 op=LOAD Jan 16 18:04:45.416000 audit[3542]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3530 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639346166633436393836633434303564373366386335323966303861 Jan 16 18:04:45.416000 audit: BPF prog-id=144 op=LOAD Jan 16 18:04:45.416000 audit[3542]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3530 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.416000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639346166633436393836633434303564373366386335323966303861 Jan 16 18:04:45.417000 audit: BPF prog-id=144 op=UNLOAD Jan 16 18:04:45.417000 audit[3542]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3530 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639346166633436393836633434303564373366386335323966303861 Jan 16 18:04:45.417000 audit: BPF prog-id=143 op=UNLOAD Jan 16 18:04:45.417000 audit[3542]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3530 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639346166633436393836633434303564373366386335323966303861 Jan 16 18:04:45.417000 audit: BPF prog-id=145 op=LOAD Jan 16 18:04:45.417000 audit[3542]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3530 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639346166633436393836633434303564373366386335323966303861 Jan 16 18:04:45.430512 systemd[1]: Started cri-containerd-24d580434c4718dd3c03f1a88ebeb1c9d532234b207a95faccd9004307e89668.scope - libcontainer container 24d580434c4718dd3c03f1a88ebeb1c9d532234b207a95faccd9004307e89668. 
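In the audit records around this point, the PROCTITLE value is the process command line (NUL-separated argv), hex-encoded by auditd. A one-off decoding sketch for the iptables NETFILTER_CFG record at 18:04:45.851 below; the hex string is copied verbatim from that record.

    # NUL-separated argv, hex-encoded in the audit PROCTITLE record.
    hexstr = ("69707461626C6573002D770035002D5700313030303030"
              "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65")
    print(" ".join(bytes.fromhex(hexstr).decode().split("\x00")))
    # -> iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle
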
Jan 16 18:04:45.496151 containerd[1954]: time="2026-01-16T18:04:45.496027062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-wncwf,Uid:b251acbc-6988-4386-bfc5-6450d151332a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"694afc46986c4405d73f8c529f08a1df4f5242ca7bbebfd83c86f9943f892e85\"" Jan 16 18:04:45.502415 containerd[1954]: time="2026-01-16T18:04:45.502356265Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 16 18:04:45.531000 audit: BPF prog-id=146 op=LOAD Jan 16 18:04:45.531000 audit[3555]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3485 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234643538303433346334373138646433633033663161383865626562 Jan 16 18:04:45.532000 audit: BPF prog-id=147 op=LOAD Jan 16 18:04:45.532000 audit[3555]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3485 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234643538303433346334373138646433633033663161383865626562 Jan 16 18:04:45.532000 audit: BPF prog-id=147 op=UNLOAD Jan 16 18:04:45.532000 audit[3555]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3485 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234643538303433346334373138646433633033663161383865626562 Jan 16 18:04:45.532000 audit: BPF prog-id=146 op=UNLOAD Jan 16 18:04:45.532000 audit[3555]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3485 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234643538303433346334373138646433633033663161383865626562 Jan 16 18:04:45.532000 audit: BPF prog-id=148 op=LOAD Jan 16 18:04:45.532000 audit[3555]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3485 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234643538303433346334373138646433633033663161383865626562 Jan 16 18:04:45.582883 containerd[1954]: time="2026-01-16T18:04:45.582703731Z" level=info msg="StartContainer for \"24d580434c4718dd3c03f1a88ebeb1c9d532234b207a95faccd9004307e89668\" returns successfully" Jan 16 18:04:45.849000 audit[3628]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3628 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:45.849000 audit[3628]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeeab0fa0 a2=0 a3=1 items=0 ppid=3574 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.849000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 16 18:04:45.851000 audit[3627]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3627 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:45.851000 audit[3627]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdc6a27c0 a2=0 a3=1 items=0 ppid=3574 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.851000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 16 18:04:45.851000 audit[3630]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3630 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:45.851000 audit[3630]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe138e7a0 a2=0 a3=1 items=0 ppid=3574 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.851000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 16 18:04:45.854000 audit[3631]: NETFILTER_CFG table=filter:57 family=10 entries=1 op=nft_register_chain pid=3631 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:45.854000 audit[3631]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd9da7a20 a2=0 a3=1 items=0 ppid=3574 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.854000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 16 18:04:45.856000 audit[3632]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3632 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:45.856000 audit[3632]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc4bd5530 a2=0 a3=1 items=0 ppid=3574 pid=3632 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.856000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 16 18:04:45.866000 audit[3633]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3633 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:45.866000 audit[3633]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffee006740 a2=0 a3=1 items=0 ppid=3574 pid=3633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.866000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 16 18:04:45.959000 audit[3634]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3634 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:45.959000 audit[3634]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd42cb450 a2=0 a3=1 items=0 ppid=3574 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.959000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 16 18:04:45.965000 audit[3636]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3636 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:45.965000 audit[3636]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffed7506f0 a2=0 a3=1 items=0 ppid=3574 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.965000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 16 18:04:45.973000 audit[3639]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3639 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:45.973000 audit[3639]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffcf54bf30 a2=0 a3=1 items=0 ppid=3574 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.973000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 16 18:04:45.975000 audit[3640]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3640 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:45.975000 audit[3640]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=100 a0=3 a1=fffffe503970 a2=0 a3=1 items=0 ppid=3574 pid=3640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.975000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 16 18:04:45.982000 audit[3642]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3642 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:45.982000 audit[3642]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc892ec80 a2=0 a3=1 items=0 ppid=3574 pid=3642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.982000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 16 18:04:45.985000 audit[3643]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3643 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:45.985000 audit[3643]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff1906b70 a2=0 a3=1 items=0 ppid=3574 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.985000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 16 18:04:45.991000 audit[3645]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3645 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:45.991000 audit[3645]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffec212be0 a2=0 a3=1 items=0 ppid=3574 pid=3645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.991000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 16 18:04:45.999000 audit[3648]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3648 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:45.999000 audit[3648]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffec9ee660 a2=0 a3=1 items=0 ppid=3574 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:45.999000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 16 18:04:46.001000 
audit[3649]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3649 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:46.001000 audit[3649]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd15ecdc0 a2=0 a3=1 items=0 ppid=3574 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.001000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 16 18:04:46.007000 audit[3651]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3651 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:46.007000 audit[3651]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff34e4bc0 a2=0 a3=1 items=0 ppid=3574 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.007000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 16 18:04:46.010000 audit[3652]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3652 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:46.010000 audit[3652]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff0d0de60 a2=0 a3=1 items=0 ppid=3574 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.010000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 16 18:04:46.018000 audit[3654]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3654 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:46.018000 audit[3654]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffebac0df0 a2=0 a3=1 items=0 ppid=3574 pid=3654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.018000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 16 18:04:46.026000 audit[3657]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3657 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:46.026000 audit[3657]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffee8e9960 a2=0 a3=1 items=0 ppid=3574 pid=3657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.026000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 16 18:04:46.034000 audit[3660]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3660 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:46.034000 audit[3660]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd20caec0 a2=0 a3=1 items=0 ppid=3574 pid=3660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.034000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 16 18:04:46.036000 audit[3661]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3661 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:46.036000 audit[3661]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdfec4070 a2=0 a3=1 items=0 ppid=3574 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.036000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 16 18:04:46.042000 audit[3663]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3663 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:46.042000 audit[3663]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffe4c0c7a0 a2=0 a3=1 items=0 ppid=3574 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.042000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 18:04:46.053000 audit[3666]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3666 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:46.053000 audit[3666]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff2193170 a2=0 a3=1 items=0 ppid=3574 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.053000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 18:04:46.056000 audit[3667]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3667 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:46.056000 audit[3667]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcbf3ad60 
a2=0 a3=1 items=0 ppid=3574 pid=3667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.056000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 16 18:04:46.061000 audit[3669]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3669 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:04:46.061000 audit[3669]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffd9bd60b0 a2=0 a3=1 items=0 ppid=3574 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.061000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 16 18:04:46.099000 audit[3675]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3675 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:04:46.099000 audit[3675]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc7146540 a2=0 a3=1 items=0 ppid=3574 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.099000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:04:46.111000 audit[3675]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3675 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:04:46.111000 audit[3675]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffc7146540 a2=0 a3=1 items=0 ppid=3574 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.111000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:04:46.121000 audit[3680]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3680 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:46.121000 audit[3680]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe4d73910 a2=0 a3=1 items=0 ppid=3574 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.121000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 16 18:04:46.127000 audit[3682]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3682 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:46.127000 audit[3682]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc6ccb220 a2=0 a3=1 items=0 ppid=3574 pid=3682 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.127000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 16 18:04:46.135000 audit[3685]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3685 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:46.135000 audit[3685]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffeae58b10 a2=0 a3=1 items=0 ppid=3574 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.135000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 16 18:04:46.138000 audit[3686]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3686 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:46.138000 audit[3686]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffda231b50 a2=0 a3=1 items=0 ppid=3574 pid=3686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.138000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 16 18:04:46.144000 audit[3688]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3688 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:46.144000 audit[3688]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff85b27f0 a2=0 a3=1 items=0 ppid=3574 pid=3688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.144000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 16 18:04:46.146000 audit[3689]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3689 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:46.146000 audit[3689]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff4153800 a2=0 a3=1 items=0 ppid=3574 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.146000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 16 18:04:46.153000 audit[3691]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3691 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:46.153000 audit[3691]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffef44f0a0 a2=0 a3=1 items=0 ppid=3574 pid=3691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.153000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 16 18:04:46.162000 audit[3694]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3694 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:46.162000 audit[3694]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe6df7a40 a2=0 a3=1 items=0 ppid=3574 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.162000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 16 18:04:46.165000 audit[3695]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3695 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:46.165000 audit[3695]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc4e6a500 a2=0 a3=1 items=0 ppid=3574 pid=3695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.165000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 16 18:04:46.171000 audit[3697]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3697 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:46.171000 audit[3697]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc45fc310 a2=0 a3=1 items=0 ppid=3574 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.171000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 16 18:04:46.174000 audit[3698]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3698 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:46.174000 audit[3698]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffde12e730 a2=0 a3=1 items=0 ppid=3574 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.174000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 16 18:04:46.180000 audit[3700]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3700 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:46.180000 audit[3700]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc9210e20 a2=0 a3=1 items=0 ppid=3574 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.180000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 16 18:04:46.188000 audit[3703]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3703 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:46.188000 audit[3703]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc780a740 a2=0 a3=1 items=0 ppid=3574 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.188000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 16 18:04:46.196000 audit[3706]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3706 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:46.196000 audit[3706]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffaaf72e0 a2=0 a3=1 items=0 ppid=3574 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.196000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 16 18:04:46.200000 audit[3707]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3707 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:46.200000 audit[3707]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe9e75dd0 a2=0 a3=1 items=0 ppid=3574 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.200000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 16 18:04:46.205000 audit[3709]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3709 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:46.205000 audit[3709]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffe7567950 a2=0 a3=1 items=0 ppid=3574 pid=3709 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.205000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 18:04:46.213000 audit[3712]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3712 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:46.213000 audit[3712]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffeb971c70 a2=0 a3=1 items=0 ppid=3574 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.213000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 18:04:46.216000 audit[3713]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3713 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:46.216000 audit[3713]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc1b92930 a2=0 a3=1 items=0 ppid=3574 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.216000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 16 18:04:46.224000 audit[3715]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3715 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:46.224000 audit[3715]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffc96ad980 a2=0 a3=1 items=0 ppid=3574 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.224000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 16 18:04:46.229000 audit[3716]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3716 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:46.229000 audit[3716]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc8ab8bc0 a2=0 a3=1 items=0 ppid=3574 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.229000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 16 18:04:46.237000 audit[3718]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3718 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:46.237000 
audit[3718]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd355ba00 a2=0 a3=1 items=0 ppid=3574 pid=3718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.237000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 16 18:04:46.246000 audit[3721]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3721 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:04:46.246000 audit[3721]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff4e272f0 a2=0 a3=1 items=0 ppid=3574 pid=3721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.246000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 16 18:04:46.253000 audit[3723]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3723 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 16 18:04:46.253000 audit[3723]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffd7614a40 a2=0 a3=1 items=0 ppid=3574 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.253000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:04:46.254000 audit[3723]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3723 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 16 18:04:46.254000 audit[3723]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffd7614a40 a2=0 a3=1 items=0 ppid=3574 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:46.254000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:04:47.833090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4075901951.mount: Deactivated successfully. 
Jan 16 18:04:48.561457 containerd[1954]: time="2026-01-16T18:04:48.560739441Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:04:48.565475 containerd[1954]: time="2026-01-16T18:04:48.565030398Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Jan 16 18:04:48.571932 containerd[1954]: time="2026-01-16T18:04:48.569315639Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:04:48.572197 kubelet[3428]: I0116 18:04:48.571443 3428 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kgxt5" podStartSLOduration=4.571417374 podStartE2EDuration="4.571417374s" podCreationTimestamp="2026-01-16 18:04:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 18:04:45.826322521 +0000 UTC m=+6.384021429" watchObservedRunningTime="2026-01-16 18:04:48.571417374 +0000 UTC m=+9.129116258" Jan 16 18:04:48.582239 containerd[1954]: time="2026-01-16T18:04:48.582166286Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:04:48.583665 containerd[1954]: time="2026-01-16T18:04:48.583606790Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 3.080970329s" Jan 16 18:04:48.583783 containerd[1954]: time="2026-01-16T18:04:48.583663854Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 16 18:04:48.589997 containerd[1954]: time="2026-01-16T18:04:48.589949595Z" level=info msg="CreateContainer within sandbox \"694afc46986c4405d73f8c529f08a1df4f5242ca7bbebfd83c86f9943f892e85\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 16 18:04:48.606440 containerd[1954]: time="2026-01-16T18:04:48.606371126Z" level=info msg="Container a4ff43560cfe6e6219d9bcd7f5b8db0b3a54fa69b29fc706f5f46ff96880e7d1: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:04:48.624546 containerd[1954]: time="2026-01-16T18:04:48.624463652Z" level=info msg="CreateContainer within sandbox \"694afc46986c4405d73f8c529f08a1df4f5242ca7bbebfd83c86f9943f892e85\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a4ff43560cfe6e6219d9bcd7f5b8db0b3a54fa69b29fc706f5f46ff96880e7d1\"" Jan 16 18:04:48.627293 containerd[1954]: time="2026-01-16T18:04:48.626794305Z" level=info msg="StartContainer for \"a4ff43560cfe6e6219d9bcd7f5b8db0b3a54fa69b29fc706f5f46ff96880e7d1\"" Jan 16 18:04:48.630819 containerd[1954]: time="2026-01-16T18:04:48.630748169Z" level=info msg="connecting to shim a4ff43560cfe6e6219d9bcd7f5b8db0b3a54fa69b29fc706f5f46ff96880e7d1" address="unix:///run/containerd/s/3d208ab3ac49fa4a75a1ca7781353a447ff5fa2700a09a2a532624efc82c2dbb" protocol=ttrpc version=3 Jan 16 18:04:48.675487 systemd[1]: Started 
cri-containerd-a4ff43560cfe6e6219d9bcd7f5b8db0b3a54fa69b29fc706f5f46ff96880e7d1.scope - libcontainer container a4ff43560cfe6e6219d9bcd7f5b8db0b3a54fa69b29fc706f5f46ff96880e7d1. Jan 16 18:04:48.698000 audit: BPF prog-id=149 op=LOAD Jan 16 18:04:48.699000 audit: BPF prog-id=150 op=LOAD Jan 16 18:04:48.699000 audit[3734]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3530 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:48.699000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134666634333536306366653665363231396439626364376635623864 Jan 16 18:04:48.700000 audit: BPF prog-id=150 op=UNLOAD Jan 16 18:04:48.700000 audit[3734]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3530 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:48.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134666634333536306366653665363231396439626364376635623864 Jan 16 18:04:48.700000 audit: BPF prog-id=151 op=LOAD Jan 16 18:04:48.700000 audit[3734]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3530 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:48.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134666634333536306366653665363231396439626364376635623864 Jan 16 18:04:48.700000 audit: BPF prog-id=152 op=LOAD Jan 16 18:04:48.700000 audit[3734]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3530 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:48.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134666634333536306366653665363231396439626364376635623864 Jan 16 18:04:48.700000 audit: BPF prog-id=152 op=UNLOAD Jan 16 18:04:48.700000 audit[3734]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3530 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:48.700000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134666634333536306366653665363231396439626364376635623864 Jan 16 18:04:48.700000 audit: BPF prog-id=151 op=UNLOAD Jan 16 18:04:48.700000 audit[3734]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3530 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:48.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134666634333536306366653665363231396439626364376635623864 Jan 16 18:04:48.700000 audit: BPF prog-id=153 op=LOAD Jan 16 18:04:48.700000 audit[3734]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3530 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:48.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134666634333536306366653665363231396439626364376635623864 Jan 16 18:04:48.739892 containerd[1954]: time="2026-01-16T18:04:48.739822259Z" level=info msg="StartContainer for \"a4ff43560cfe6e6219d9bcd7f5b8db0b3a54fa69b29fc706f5f46ff96880e7d1\" returns successfully" Jan 16 18:04:49.755549 kubelet[3428]: I0116 18:04:49.755452 3428 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-wncwf" podStartSLOduration=2.6694609849999997 podStartE2EDuration="5.75542868s" podCreationTimestamp="2026-01-16 18:04:44 +0000 UTC" firstStartedPulling="2026-01-16 18:04:45.499866388 +0000 UTC m=+6.057565260" lastFinishedPulling="2026-01-16 18:04:48.585834083 +0000 UTC m=+9.143532955" observedRunningTime="2026-01-16 18:04:48.871721533 +0000 UTC m=+9.429420417" watchObservedRunningTime="2026-01-16 18:04:49.75542868 +0000 UTC m=+10.313127552" Jan 16 18:04:55.623315 sudo[2362]: pam_unix(sudo:session): session closed for user root Jan 16 18:04:55.622000 audit[2362]: USER_END pid=2362 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:04:55.625620 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 16 18:04:55.625701 kernel: audit: type=1106 audit(1768586695.622:519): pid=2362 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:04:55.622000 audit[2362]: CRED_DISP pid=2362 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 16 18:04:55.644444 kernel: audit: type=1104 audit(1768586695.622:520): pid=2362 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:04:55.716385 sshd[2361]: Connection closed by 4.153.228.146 port 38900 Jan 16 18:04:55.716784 sshd-session[2357]: pam_unix(sshd:session): session closed for user core Jan 16 18:04:55.721000 audit[2357]: USER_END pid=2357 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:04:55.729324 systemd[1]: sshd@8-172.31.22.249:22-4.153.228.146:38900.service: Deactivated successfully. Jan 16 18:04:55.721000 audit[2357]: CRED_DISP pid=2357 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:04:55.741383 systemd[1]: session-10.scope: Deactivated successfully. Jan 16 18:04:55.741973 kernel: audit: type=1106 audit(1768586695.721:521): pid=2357 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:04:55.742054 kernel: audit: type=1104 audit(1768586695.721:522): pid=2357 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:04:55.743183 systemd[1]: session-10.scope: Consumed 11.338s CPU time, 219.6M memory peak. Jan 16 18:04:55.749048 systemd-logind[1935]: Session 10 logged out. Waiting for processes to exit. Jan 16 18:04:55.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.22.249:22-4.153.228.146:38900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:55.755281 kernel: audit: type=1131 audit(1768586695.728:523): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.22.249:22-4.153.228.146:38900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:55.755733 systemd-logind[1935]: Removed session 10. 
Jan 16 18:04:57.509000 audit[3813]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3813 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:04:57.509000 audit[3813]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd68eab00 a2=0 a3=1 items=0 ppid=3574 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:57.521905 kernel: audit: type=1325 audit(1768586697.509:524): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3813 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:04:57.522101 kernel: audit: type=1300 audit(1768586697.509:524): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd68eab00 a2=0 a3=1 items=0 ppid=3574 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:57.509000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:04:57.527745 kernel: audit: type=1327 audit(1768586697.509:524): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:04:57.527000 audit[3813]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3813 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:04:57.532444 kernel: audit: type=1325 audit(1768586697.527:525): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3813 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:04:57.527000 audit[3813]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd68eab00 a2=0 a3=1 items=0 ppid=3574 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:57.541163 kernel: audit: type=1300 audit(1768586697.527:525): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd68eab00 a2=0 a3=1 items=0 ppid=3574 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:57.527000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:04:57.601000 audit[3815]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3815 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:04:57.601000 audit[3815]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffa1ca0e0 a2=0 a3=1 items=0 ppid=3574 pid=3815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:57.601000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:04:57.607000 audit[3815]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3815 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 16 18:04:57.607000 audit[3815]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffa1ca0e0 a2=0 a3=1 items=0 ppid=3574 pid=3815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:57.607000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:07.751000 audit[3820]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3820 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:07.753873 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 16 18:05:07.753966 kernel: audit: type=1325 audit(1768586707.751:528): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3820 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:07.751000 audit[3820]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff5118b50 a2=0 a3=1 items=0 ppid=3574 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.765723 kernel: audit: type=1300 audit(1768586707.751:528): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff5118b50 a2=0 a3=1 items=0 ppid=3574 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.751000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:07.771407 kernel: audit: type=1327 audit(1768586707.751:528): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:07.772000 audit[3820]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3820 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:07.772000 audit[3820]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff5118b50 a2=0 a3=1 items=0 ppid=3574 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.785143 kernel: audit: type=1325 audit(1768586707.772:529): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3820 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:07.785293 kernel: audit: type=1300 audit(1768586707.772:529): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff5118b50 a2=0 a3=1 items=0 ppid=3574 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.772000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:07.790600 kernel: audit: type=1327 audit(1768586707.772:529): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:07.809000 audit[3822]: NETFILTER_CFG 
table=filter:111 family=2 entries=18 op=nft_register_rule pid=3822 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:07.809000 audit[3822]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe14175d0 a2=0 a3=1 items=0 ppid=3574 pid=3822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.822080 kernel: audit: type=1325 audit(1768586707.809:530): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3822 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:07.822232 kernel: audit: type=1300 audit(1768586707.809:530): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe14175d0 a2=0 a3=1 items=0 ppid=3574 pid=3822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.809000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:07.827506 kernel: audit: type=1327 audit(1768586707.809:530): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:07.830000 audit[3822]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3822 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:07.830000 audit[3822]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe14175d0 a2=0 a3=1 items=0 ppid=3574 pid=3822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.830000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:07.836193 kernel: audit: type=1325 audit(1768586707.830:531): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3822 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:08.700000 audit[3824]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3824 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:08.700000 audit[3824]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffdd40b1d0 a2=0 a3=1 items=0 ppid=3574 pid=3824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:08.700000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:08.704000 audit[3824]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3824 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:08.704000 audit[3824]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdd40b1d0 a2=0 a3=1 items=0 ppid=3574 pid=3824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:08.704000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:09.877000 audit[3826]: NETFILTER_CFG table=filter:115 family=2 entries=20 op=nft_register_rule pid=3826 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:09.877000 audit[3826]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd70f5400 a2=0 a3=1 items=0 ppid=3574 pid=3826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:09.877000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:09.884000 audit[3826]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3826 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:09.884000 audit[3826]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd70f5400 a2=0 a3=1 items=0 ppid=3574 pid=3826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:09.884000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:12.131000 audit[3831]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3831 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:12.131000 audit[3831]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc2046660 a2=0 a3=1 items=0 ppid=3574 pid=3831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:12.131000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:12.140000 audit[3831]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3831 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:12.140000 audit[3831]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc2046660 a2=0 a3=1 items=0 ppid=3574 pid=3831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:12.140000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:12.206949 systemd[1]: Created slice kubepods-besteffort-pod47fc5387_2c1a_4350_8af8_7d9c13b38ca5.slice - libcontainer container kubepods-besteffort-pod47fc5387_2c1a_4350_8af8_7d9c13b38ca5.slice. 
Jan 16 18:05:12.222720 kubelet[3428]: I0116 18:05:12.222412 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47fc5387-2c1a-4350-8af8-7d9c13b38ca5-tigera-ca-bundle\") pod \"calico-typha-77ffc79fcf-lgdpr\" (UID: \"47fc5387-2c1a-4350-8af8-7d9c13b38ca5\") " pod="calico-system/calico-typha-77ffc79fcf-lgdpr" Jan 16 18:05:12.222720 kubelet[3428]: I0116 18:05:12.222519 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/47fc5387-2c1a-4350-8af8-7d9c13b38ca5-typha-certs\") pod \"calico-typha-77ffc79fcf-lgdpr\" (UID: \"47fc5387-2c1a-4350-8af8-7d9c13b38ca5\") " pod="calico-system/calico-typha-77ffc79fcf-lgdpr" Jan 16 18:05:12.222720 kubelet[3428]: I0116 18:05:12.222570 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95sqx\" (UniqueName: \"kubernetes.io/projected/47fc5387-2c1a-4350-8af8-7d9c13b38ca5-kube-api-access-95sqx\") pod \"calico-typha-77ffc79fcf-lgdpr\" (UID: \"47fc5387-2c1a-4350-8af8-7d9c13b38ca5\") " pod="calico-system/calico-typha-77ffc79fcf-lgdpr" Jan 16 18:05:12.278000 audit[3833]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=3833 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:12.278000 audit[3833]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffca3fd9b0 a2=0 a3=1 items=0 ppid=3574 pid=3833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:12.278000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:12.283000 audit[3833]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=3833 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:12.283000 audit[3833]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffca3fd9b0 a2=0 a3=1 items=0 ppid=3574 pid=3833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:12.283000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:12.451849 systemd[1]: Created slice kubepods-besteffort-pod1fb45c3f_1b3a_4ab6_9f1c_39354b5fc20d.slice - libcontainer container kubepods-besteffort-pod1fb45c3f_1b3a_4ab6_9f1c_39354b5fc20d.slice. 
Jan 16 18:05:12.517496 containerd[1954]: time="2026-01-16T18:05:12.517422578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77ffc79fcf-lgdpr,Uid:47fc5387-2c1a-4350-8af8-7d9c13b38ca5,Namespace:calico-system,Attempt:0,}" Jan 16 18:05:12.525244 kubelet[3428]: I0116 18:05:12.524937 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d-flexvol-driver-host\") pod \"calico-node-8g8zx\" (UID: \"1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d\") " pod="calico-system/calico-node-8g8zx" Jan 16 18:05:12.525442 kubelet[3428]: I0116 18:05:12.525397 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d-cni-log-dir\") pod \"calico-node-8g8zx\" (UID: \"1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d\") " pod="calico-system/calico-node-8g8zx" Jan 16 18:05:12.525616 kubelet[3428]: I0116 18:05:12.525534 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d-cni-net-dir\") pod \"calico-node-8g8zx\" (UID: \"1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d\") " pod="calico-system/calico-node-8g8zx" Jan 16 18:05:12.525684 kubelet[3428]: I0116 18:05:12.525661 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d-lib-modules\") pod \"calico-node-8g8zx\" (UID: \"1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d\") " pod="calico-system/calico-node-8g8zx" Jan 16 18:05:12.525743 kubelet[3428]: I0116 18:05:12.525705 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d-policysync\") pod \"calico-node-8g8zx\" (UID: \"1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d\") " pod="calico-system/calico-node-8g8zx" Jan 16 18:05:12.525806 kubelet[3428]: I0116 18:05:12.525768 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d-var-lib-calico\") pod \"calico-node-8g8zx\" (UID: \"1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d\") " pod="calico-system/calico-node-8g8zx" Jan 16 18:05:12.525863 kubelet[3428]: I0116 18:05:12.525804 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d-var-run-calico\") pod \"calico-node-8g8zx\" (UID: \"1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d\") " pod="calico-system/calico-node-8g8zx" Jan 16 18:05:12.526139 kubelet[3428]: I0116 18:05:12.525931 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d-tigera-ca-bundle\") pod \"calico-node-8g8zx\" (UID: \"1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d\") " pod="calico-system/calico-node-8g8zx" Jan 16 18:05:12.526139 kubelet[3428]: I0116 18:05:12.526030 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d-cni-bin-dir\") pod \"calico-node-8g8zx\" (UID: \"1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d\") " pod="calico-system/calico-node-8g8zx" Jan 16 18:05:12.526280 kubelet[3428]: I0116 18:05:12.526180 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d-xtables-lock\") pod \"calico-node-8g8zx\" (UID: \"1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d\") " pod="calico-system/calico-node-8g8zx" Jan 16 18:05:12.527170 kubelet[3428]: I0116 18:05:12.526482 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d-node-certs\") pod \"calico-node-8g8zx\" (UID: \"1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d\") " pod="calico-system/calico-node-8g8zx" Jan 16 18:05:12.527170 kubelet[3428]: I0116 18:05:12.526606 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtf2d\" (UniqueName: \"kubernetes.io/projected/1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d-kube-api-access-qtf2d\") pod \"calico-node-8g8zx\" (UID: \"1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d\") " pod="calico-system/calico-node-8g8zx" Jan 16 18:05:12.604080 containerd[1954]: time="2026-01-16T18:05:12.603989918Z" level=info msg="connecting to shim fa313b52ddf2f506e910598c93ea4a4e7a72cc0888e60d44f668dbb599dd930a" address="unix:///run/containerd/s/af0bba6825e5b6c8d91f73bceace28fa4571ee6789f68e9447216711d189068d" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:05:12.610017 kubelet[3428]: E0116 18:05:12.609397 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88slq" podUID="74813863-8ca6-40d9-bd92-5b37511fc2e0" Jan 16 18:05:12.649997 kubelet[3428]: E0116 18:05:12.649959 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.650331 kubelet[3428]: W0116 18:05:12.650204 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.652854 kubelet[3428]: E0116 18:05:12.651210 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.653749 kubelet[3428]: E0116 18:05:12.653714 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.654000 kubelet[3428]: W0116 18:05:12.653882 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.654136 kubelet[3428]: E0116 18:05:12.654093 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:12.656700 kubelet[3428]: E0116 18:05:12.656622 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.656700 kubelet[3428]: W0116 18:05:12.656658 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.657137 kubelet[3428]: E0116 18:05:12.656955 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.657872 kubelet[3428]: E0116 18:05:12.657580 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.657872 kubelet[3428]: W0116 18:05:12.657609 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.657872 kubelet[3428]: E0116 18:05:12.657637 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.658932 kubelet[3428]: E0116 18:05:12.658562 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.660512 kubelet[3428]: W0116 18:05:12.660298 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.660512 kubelet[3428]: E0116 18:05:12.660355 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.671061 kubelet[3428]: E0116 18:05:12.671026 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.671349 kubelet[3428]: W0116 18:05:12.671245 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.671349 kubelet[3428]: E0116 18:05:12.671288 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.697532 kubelet[3428]: E0116 18:05:12.696555 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.698108 kubelet[3428]: W0116 18:05:12.698020 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.701528 kubelet[3428]: E0116 18:05:12.700994 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:12.704064 kubelet[3428]: E0116 18:05:12.703292 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.704064 kubelet[3428]: W0116 18:05:12.703326 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.704064 kubelet[3428]: E0116 18:05:12.703400 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.705309 kubelet[3428]: E0116 18:05:12.705259 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.705894 kubelet[3428]: W0116 18:05:12.705405 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.705894 kubelet[3428]: E0116 18:05:12.705440 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.709706 kubelet[3428]: E0116 18:05:12.709586 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.709706 kubelet[3428]: W0116 18:05:12.709618 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.709706 kubelet[3428]: E0116 18:05:12.709650 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.719574 kubelet[3428]: E0116 18:05:12.719305 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.719574 kubelet[3428]: W0116 18:05:12.719341 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.719574 kubelet[3428]: E0116 18:05:12.719375 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.723149 kubelet[3428]: E0116 18:05:12.721441 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.724764 kubelet[3428]: W0116 18:05:12.724717 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.724994 kubelet[3428]: E0116 18:05:12.724965 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:12.725539 kubelet[3428]: E0116 18:05:12.725510 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.725717 kubelet[3428]: W0116 18:05:12.725690 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.725824 kubelet[3428]: E0116 18:05:12.725801 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.726440 systemd[1]: Started cri-containerd-fa313b52ddf2f506e910598c93ea4a4e7a72cc0888e60d44f668dbb599dd930a.scope - libcontainer container fa313b52ddf2f506e910598c93ea4a4e7a72cc0888e60d44f668dbb599dd930a. Jan 16 18:05:12.727615 kubelet[3428]: E0116 18:05:12.727580 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.727769 kubelet[3428]: W0116 18:05:12.727742 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.730159 kubelet[3428]: E0116 18:05:12.729212 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.730418 kubelet[3428]: E0116 18:05:12.730390 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.730621 kubelet[3428]: W0116 18:05:12.730591 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.730821 kubelet[3428]: E0116 18:05:12.730780 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.732229 kubelet[3428]: E0116 18:05:12.732193 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.732571 kubelet[3428]: W0116 18:05:12.732384 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.732571 kubelet[3428]: E0116 18:05:12.732424 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:12.733770 kubelet[3428]: E0116 18:05:12.733544 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.733770 kubelet[3428]: W0116 18:05:12.733575 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.733770 kubelet[3428]: E0116 18:05:12.733606 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.737705 kubelet[3428]: E0116 18:05:12.735186 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.737705 kubelet[3428]: W0116 18:05:12.735219 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.737705 kubelet[3428]: E0116 18:05:12.735251 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.738307 kubelet[3428]: E0116 18:05:12.738275 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.738468 kubelet[3428]: W0116 18:05:12.738443 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.738589 kubelet[3428]: E0116 18:05:12.738565 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.739245 kubelet[3428]: E0116 18:05:12.739195 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.739430 kubelet[3428]: W0116 18:05:12.739403 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.739574 kubelet[3428]: E0116 18:05:12.739550 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.740524 kubelet[3428]: E0116 18:05:12.740373 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.740524 kubelet[3428]: W0116 18:05:12.740403 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.740750 kubelet[3428]: E0116 18:05:12.740720 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:12.741462 kubelet[3428]: E0116 18:05:12.741321 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.741462 kubelet[3428]: W0116 18:05:12.741350 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.741462 kubelet[3428]: E0116 18:05:12.741399 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.742325 kubelet[3428]: E0116 18:05:12.742218 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.742325 kubelet[3428]: W0116 18:05:12.742251 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.742325 kubelet[3428]: E0116 18:05:12.742280 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.743313 kubelet[3428]: E0116 18:05:12.743155 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.743313 kubelet[3428]: W0116 18:05:12.743186 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.743313 kubelet[3428]: E0116 18:05:12.743215 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.743889 kubelet[3428]: E0116 18:05:12.743854 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.744175 kubelet[3428]: W0116 18:05:12.744004 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.744175 kubelet[3428]: E0116 18:05:12.744040 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.744667 kubelet[3428]: E0116 18:05:12.744632 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.744929 kubelet[3428]: W0116 18:05:12.744796 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.744929 kubelet[3428]: E0116 18:05:12.744833 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:12.745530 kubelet[3428]: E0116 18:05:12.745426 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.745530 kubelet[3428]: W0116 18:05:12.745455 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.745530 kubelet[3428]: E0116 18:05:12.745482 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.746493 kubelet[3428]: E0116 18:05:12.746393 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.746493 kubelet[3428]: W0116 18:05:12.746426 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.746493 kubelet[3428]: E0116 18:05:12.746456 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.747178 kubelet[3428]: I0116 18:05:12.746803 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/74813863-8ca6-40d9-bd92-5b37511fc2e0-varrun\") pod \"csi-node-driver-88slq\" (UID: \"74813863-8ca6-40d9-bd92-5b37511fc2e0\") " pod="calico-system/csi-node-driver-88slq" Jan 16 18:05:12.747472 kubelet[3428]: E0116 18:05:12.747414 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.747472 kubelet[3428]: W0116 18:05:12.747440 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.747808 kubelet[3428]: E0116 18:05:12.747680 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.748310 kubelet[3428]: E0116 18:05:12.748242 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.748310 kubelet[3428]: W0116 18:05:12.748274 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.748628 kubelet[3428]: E0116 18:05:12.748501 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:12.748983 kubelet[3428]: I0116 18:05:12.748934 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74813863-8ca6-40d9-bd92-5b37511fc2e0-kubelet-dir\") pod \"csi-node-driver-88slq\" (UID: \"74813863-8ca6-40d9-bd92-5b37511fc2e0\") " pod="calico-system/csi-node-driver-88slq" Jan 16 18:05:12.749287 kubelet[3428]: E0116 18:05:12.749262 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.749474 kubelet[3428]: W0116 18:05:12.749405 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.749474 kubelet[3428]: E0116 18:05:12.749441 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.750104 kubelet[3428]: E0116 18:05:12.750038 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.750104 kubelet[3428]: W0116 18:05:12.750068 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.750489 kubelet[3428]: E0116 18:05:12.750367 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.750915 kubelet[3428]: E0116 18:05:12.750856 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.750915 kubelet[3428]: W0116 18:05:12.750884 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.751233 kubelet[3428]: E0116 18:05:12.751084 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.751375 kubelet[3428]: I0116 18:05:12.751349 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4p5l\" (UniqueName: \"kubernetes.io/projected/74813863-8ca6-40d9-bd92-5b37511fc2e0-kube-api-access-s4p5l\") pod \"csi-node-driver-88slq\" (UID: \"74813863-8ca6-40d9-bd92-5b37511fc2e0\") " pod="calico-system/csi-node-driver-88slq" Jan 16 18:05:12.751845 kubelet[3428]: E0116 18:05:12.751767 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.751845 kubelet[3428]: W0116 18:05:12.751791 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.751845 kubelet[3428]: E0116 18:05:12.751816 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:12.752578 kubelet[3428]: E0116 18:05:12.752520 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.752578 kubelet[3428]: W0116 18:05:12.752546 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.752906 kubelet[3428]: E0116 18:05:12.752781 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.753468 kubelet[3428]: E0116 18:05:12.753409 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.753468 kubelet[3428]: W0116 18:05:12.753435 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.753781 kubelet[3428]: E0116 18:05:12.753652 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.754293 kubelet[3428]: E0116 18:05:12.754200 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.754293 kubelet[3428]: W0116 18:05:12.754229 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.754293 kubelet[3428]: E0116 18:05:12.754256 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.754680 kubelet[3428]: I0116 18:05:12.754551 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/74813863-8ca6-40d9-bd92-5b37511fc2e0-registration-dir\") pod \"csi-node-driver-88slq\" (UID: \"74813863-8ca6-40d9-bd92-5b37511fc2e0\") " pod="calico-system/csi-node-driver-88slq" Jan 16 18:05:12.755142 kubelet[3428]: E0116 18:05:12.755091 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.755324 kubelet[3428]: W0116 18:05:12.755216 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.755589 kubelet[3428]: E0116 18:05:12.755453 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:12.756029 kubelet[3428]: E0116 18:05:12.755973 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.756029 kubelet[3428]: W0116 18:05:12.755998 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.756365 kubelet[3428]: E0116 18:05:12.756244 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.756856 kubelet[3428]: E0116 18:05:12.756760 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.756856 kubelet[3428]: W0116 18:05:12.756791 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.756856 kubelet[3428]: E0116 18:05:12.756820 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.757398 kubelet[3428]: I0116 18:05:12.757324 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/74813863-8ca6-40d9-bd92-5b37511fc2e0-socket-dir\") pod \"csi-node-driver-88slq\" (UID: \"74813863-8ca6-40d9-bd92-5b37511fc2e0\") " pod="calico-system/csi-node-driver-88slq" Jan 16 18:05:12.757983 kubelet[3428]: E0116 18:05:12.757838 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.757983 kubelet[3428]: W0116 18:05:12.757867 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.757983 kubelet[3428]: E0116 18:05:12.757897 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.758701 kubelet[3428]: E0116 18:05:12.758597 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.758701 kubelet[3428]: W0116 18:05:12.758626 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.758701 kubelet[3428]: E0116 18:05:12.758653 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:12.760816 containerd[1954]: time="2026-01-16T18:05:12.760763511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8g8zx,Uid:1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d,Namespace:calico-system,Attempt:0,}" Jan 16 18:05:12.807362 containerd[1954]: time="2026-01-16T18:05:12.807302799Z" level=info msg="connecting to shim 27e8dac55229e09e774e303070c9304127d14a5285d64087a9f514e361bd59d7" address="unix:///run/containerd/s/70494f1765970de2e9965b3efe6d8c870947b9ecfe91c1d62c7153ce1f247f09" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:05:12.822000 audit: BPF prog-id=154 op=LOAD Jan 16 18:05:12.825670 kernel: kauditd_printk_skb: 26 callbacks suppressed Jan 16 18:05:12.825780 kernel: audit: type=1334 audit(1768586712.822:540): prog-id=154 op=LOAD Jan 16 18:05:12.826000 audit: BPF prog-id=155 op=LOAD Jan 16 18:05:12.830415 kernel: audit: type=1334 audit(1768586712.826:541): prog-id=155 op=LOAD Jan 16 18:05:12.826000 audit[3858]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3845 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:12.838362 kernel: audit: type=1300 audit(1768586712.826:541): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3845 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:12.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661333133623532646466326635303665393130353938633933656134 Jan 16 18:05:12.848070 kernel: audit: type=1327 audit(1768586712.826:541): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661333133623532646466326635303665393130353938633933656134 Jan 16 18:05:12.827000 audit: BPF prog-id=155 op=UNLOAD Jan 16 18:05:12.850449 kernel: audit: type=1334 audit(1768586712.827:542): prog-id=155 op=UNLOAD Jan 16 18:05:12.827000 audit[3858]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3845 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:12.857013 kernel: audit: type=1300 audit(1768586712.827:542): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3845 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:12.857259 kernel: audit: type=1327 audit(1768586712.827:542): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661333133623532646466326635303665393130353938633933656134 Jan 16 18:05:12.827000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661333133623532646466326635303665393130353938633933656134 Jan 16 18:05:12.858957 kubelet[3428]: E0116 18:05:12.858924 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.859132 kubelet[3428]: W0116 18:05:12.859086 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.859244 kubelet[3428]: E0116 18:05:12.859221 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.859747 kubelet[3428]: E0116 18:05:12.859726 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.859897 kubelet[3428]: W0116 18:05:12.859857 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.860287 kubelet[3428]: E0116 18:05:12.860267 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.860543 kubelet[3428]: W0116 18:05:12.860379 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.860543 kubelet[3428]: E0116 18:05:12.860408 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.860821 kubelet[3428]: E0116 18:05:12.860741 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.861033 kubelet[3428]: E0116 18:05:12.860988 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.861033 kubelet[3428]: W0116 18:05:12.861007 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.861295 kubelet[3428]: E0116 18:05:12.861160 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:12.861687 kubelet[3428]: E0116 18:05:12.861641 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.861687 kubelet[3428]: W0116 18:05:12.861662 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.861940 kubelet[3428]: E0116 18:05:12.861827 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.862795 kubelet[3428]: E0116 18:05:12.862604 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.862795 kubelet[3428]: W0116 18:05:12.862637 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.862795 kubelet[3428]: E0116 18:05:12.862663 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.830000 audit: BPF prog-id=156 op=LOAD Jan 16 18:05:12.866974 kernel: audit: type=1334 audit(1768586712.830:543): prog-id=156 op=LOAD Jan 16 18:05:12.868221 kernel: audit: type=1300 audit(1768586712.830:543): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3845 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:12.830000 audit[3858]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3845 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:12.870157 kubelet[3428]: E0116 18:05:12.869778 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.870380 kubelet[3428]: W0116 18:05:12.870335 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661333133623532646466326635303665393130353938633933656134 Jan 16 18:05:12.874759 kubelet[3428]: E0116 18:05:12.874731 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.876145 kubelet[3428]: W0116 18:05:12.874994 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.880622 kernel: audit: type=1327 audit(1768586712.830:543): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661333133623532646466326635303665393130353938633933656134 Jan 16 18:05:12.839000 audit: BPF prog-id=157 op=LOAD Jan 16 18:05:12.839000 audit[3858]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3845 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:12.839000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661333133623532646466326635303665393130353938633933656134 Jan 16 18:05:12.839000 audit: BPF prog-id=157 op=UNLOAD Jan 16 18:05:12.839000 audit[3858]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3845 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:12.839000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661333133623532646466326635303665393130353938633933656134 Jan 16 18:05:12.839000 audit: BPF prog-id=156 op=UNLOAD Jan 16 18:05:12.839000 audit[3858]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3845 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:12.839000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661333133623532646466326635303665393130353938633933656134 Jan 16 18:05:12.839000 audit: BPF prog-id=158 op=LOAD Jan 16 18:05:12.839000 audit[3858]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3845 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:12.839000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661333133623532646466326635303665393130353938633933656134 Jan 16 18:05:12.882046 kubelet[3428]: E0116 18:05:12.881292 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.882046 kubelet[3428]: E0116 18:05:12.881421 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:12.883544 kubelet[3428]: E0116 18:05:12.882623 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.883544 kubelet[3428]: W0116 18:05:12.882651 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.886159 kubelet[3428]: E0116 18:05:12.883887 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.886159 kubelet[3428]: W0116 18:05:12.883964 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.886159 kubelet[3428]: E0116 18:05:12.884638 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.886159 kubelet[3428]: W0116 18:05:12.884668 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.887524 kubelet[3428]: E0116 18:05:12.886999 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.887524 kubelet[3428]: W0116 18:05:12.887060 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.887727 kubelet[3428]: E0116 18:05:12.887596 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.887727 kubelet[3428]: W0116 18:05:12.887616 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.887727 kubelet[3428]: E0116 18:05:12.887644 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.888579 kubelet[3428]: E0116 18:05:12.887928 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.888579 kubelet[3428]: E0116 18:05:12.887990 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.888579 kubelet[3428]: E0116 18:05:12.888042 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.888579 kubelet[3428]: E0116 18:05:12.888166 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:12.889413 kubelet[3428]: E0116 18:05:12.889307 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.889413 kubelet[3428]: W0116 18:05:12.889345 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.889413 kubelet[3428]: E0116 18:05:12.889381 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.893679 kubelet[3428]: E0116 18:05:12.891937 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.893679 kubelet[3428]: W0116 18:05:12.891977 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.894165 kubelet[3428]: E0116 18:05:12.893943 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.894321 kubelet[3428]: E0116 18:05:12.894282 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.894608 kubelet[3428]: W0116 18:05:12.894316 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.897326 kubelet[3428]: E0116 18:05:12.896859 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.897326 kubelet[3428]: W0116 18:05:12.896900 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.898371 kubelet[3428]: E0116 18:05:12.898335 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.898883 kubelet[3428]: E0116 18:05:12.898822 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.898883 kubelet[3428]: W0116 18:05:12.898859 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.899056 kubelet[3428]: E0116 18:05:12.898891 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:12.900193 kubelet[3428]: E0116 18:05:12.900141 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.901411 kubelet[3428]: W0116 18:05:12.900198 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.901544 kubelet[3428]: E0116 18:05:12.901429 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.902457 systemd[1]: Started cri-containerd-27e8dac55229e09e774e303070c9304127d14a5285d64087a9f514e361bd59d7.scope - libcontainer container 27e8dac55229e09e774e303070c9304127d14a5285d64087a9f514e361bd59d7. Jan 16 18:05:12.903684 kubelet[3428]: E0116 18:05:12.903212 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.903684 kubelet[3428]: W0116 18:05:12.903253 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.903684 kubelet[3428]: E0116 18:05:12.903286 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.903684 kubelet[3428]: E0116 18:05:12.903477 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.905750 kubelet[3428]: E0116 18:05:12.905561 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.905750 kubelet[3428]: W0116 18:05:12.905601 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.905750 kubelet[3428]: E0116 18:05:12.905633 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.907935 kubelet[3428]: E0116 18:05:12.907310 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.907935 kubelet[3428]: W0116 18:05:12.907437 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.907935 kubelet[3428]: E0116 18:05:12.907470 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:12.909296 kubelet[3428]: E0116 18:05:12.909242 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.909296 kubelet[3428]: W0116 18:05:12.909289 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.909469 kubelet[3428]: E0116 18:05:12.909320 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.909870 kubelet[3428]: E0116 18:05:12.909832 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.909969 kubelet[3428]: W0116 18:05:12.909868 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.909969 kubelet[3428]: E0116 18:05:12.909894 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.911619 kubelet[3428]: E0116 18:05:12.911544 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.911619 kubelet[3428]: W0116 18:05:12.911606 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.912346 kubelet[3428]: E0116 18:05:12.911640 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:12.922003 kubelet[3428]: E0116 18:05:12.921955 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:12.922003 kubelet[3428]: W0116 18:05:12.921992 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:12.922474 kubelet[3428]: E0116 18:05:12.922026 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:12.950000 audit: BPF prog-id=159 op=LOAD Jan 16 18:05:12.951000 audit: BPF prog-id=160 op=LOAD Jan 16 18:05:12.951000 audit[3945]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000206180 a2=98 a3=0 items=0 ppid=3934 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:12.951000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237653864616335353232396530396537373465333033303730633933 Jan 16 18:05:12.952000 audit: BPF prog-id=160 op=UNLOAD Jan 16 18:05:12.952000 audit[3945]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3934 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:12.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237653864616335353232396530396537373465333033303730633933 Jan 16 18:05:12.952000 audit: BPF prog-id=161 op=LOAD Jan 16 18:05:12.952000 audit[3945]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40002063e8 a2=98 a3=0 items=0 ppid=3934 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:12.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237653864616335353232396530396537373465333033303730633933 Jan 16 18:05:12.952000 audit: BPF prog-id=162 op=LOAD Jan 16 18:05:12.952000 audit[3945]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000206168 a2=98 a3=0 items=0 ppid=3934 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:12.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237653864616335353232396530396537373465333033303730633933 Jan 16 18:05:12.952000 audit: BPF prog-id=162 op=UNLOAD Jan 16 18:05:12.952000 audit[3945]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3934 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:12.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237653864616335353232396530396537373465333033303730633933 Jan 16 18:05:12.952000 audit: BPF prog-id=161 op=UNLOAD Jan 
16 18:05:12.952000 audit[3945]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3934 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:12.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237653864616335353232396530396537373465333033303730633933 Jan 16 18:05:12.952000 audit: BPF prog-id=163 op=LOAD Jan 16 18:05:12.952000 audit[3945]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000206648 a2=98 a3=0 items=0 ppid=3934 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:12.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237653864616335353232396530396537373465333033303730633933 Jan 16 18:05:12.986211 containerd[1954]: time="2026-01-16T18:05:12.986140516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77ffc79fcf-lgdpr,Uid:47fc5387-2c1a-4350-8af8-7d9c13b38ca5,Namespace:calico-system,Attempt:0,} returns sandbox id \"fa313b52ddf2f506e910598c93ea4a4e7a72cc0888e60d44f668dbb599dd930a\"" Jan 16 18:05:12.991967 containerd[1954]: time="2026-01-16T18:05:12.990953092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 16 18:05:13.006651 containerd[1954]: time="2026-01-16T18:05:13.006594636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8g8zx,Uid:1fb45c3f-1b3a-4ab6-9f1c-39354b5fc20d,Namespace:calico-system,Attempt:0,} returns sandbox id \"27e8dac55229e09e774e303070c9304127d14a5285d64087a9f514e361bd59d7\"" Jan 16 18:05:13.304000 audit[4004]: NETFILTER_CFG table=filter:121 family=2 entries=22 op=nft_register_rule pid=4004 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:13.304000 audit[4004]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffffdedc760 a2=0 a3=1 items=0 ppid=3574 pid=4004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:13.304000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:13.306000 audit[4004]: NETFILTER_CFG table=nat:122 family=2 entries=12 op=nft_register_rule pid=4004 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:13.306000 audit[4004]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffdedc760 a2=0 a3=1 items=0 ppid=3574 pid=4004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:13.306000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:14.385745 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2998895392.mount: Deactivated successfully. Jan 16 18:05:14.733893 kubelet[3428]: E0116 18:05:14.733720 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88slq" podUID="74813863-8ca6-40d9-bd92-5b37511fc2e0" Jan 16 18:05:15.612381 containerd[1954]: time="2026-01-16T18:05:15.612304961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:05:15.614907 containerd[1954]: time="2026-01-16T18:05:15.614822717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Jan 16 18:05:15.616986 containerd[1954]: time="2026-01-16T18:05:15.616917977Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:05:15.622220 containerd[1954]: time="2026-01-16T18:05:15.622103405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:05:15.623702 containerd[1954]: time="2026-01-16T18:05:15.622863305Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.631845137s" Jan 16 18:05:15.623702 containerd[1954]: time="2026-01-16T18:05:15.622919885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 16 18:05:15.626880 containerd[1954]: time="2026-01-16T18:05:15.626831537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 16 18:05:15.651834 containerd[1954]: time="2026-01-16T18:05:15.651753665Z" level=info msg="CreateContainer within sandbox \"fa313b52ddf2f506e910598c93ea4a4e7a72cc0888e60d44f668dbb599dd930a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 16 18:05:15.676280 containerd[1954]: time="2026-01-16T18:05:15.676196429Z" level=info msg="Container 96b0f3604408997306db2f18162b7ced06c96a3002231d41af7d2b766989325f: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:05:15.687268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount899529811.mount: Deactivated successfully. 
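
The repeated kubelet messages above (driver-call.go "Failed to unmarshal output for command: init" followed by plugins.go "Error dynamically probing plugins") all trace back to one condition visible in the log itself: the FlexVolume prober sees the plugin directory nodeagent~uds, but the driver executable /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is not present, so the init call produces no stdout and the kubelet cannot parse an empty string as JSON. For reference, a FlexVolume driver is expected to answer init with a small JSON status object on stdout. The sketch below is purely illustrative (it is not the actual nodeagent driver shipped by any component named in this log) and assumes the driver only needs to satisfy the kubelet's init probe:

```python
#!/usr/bin/env python3
# Illustrative-only FlexVolume driver sketch for a path like
# /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds.
# The kubelet runs "<driver> init" and parses stdout as JSON; when the
# executable is missing it gets empty output, which is exactly the
# "unexpected end of JSON input" error repeated throughout this log.
import json
import sys


def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # Report success and declare that attach/detach is not implemented,
        # so the kubelet will not issue those calls to this driver.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
    else:
        # Any operation this sketch does not handle is reported as such.
        print(json.dumps({"status": "Not supported"}))
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

With an executable driver of this shape installed at the path named in the log, the probe errors should stop; otherwise the messages are noise from periodic plugin probing and do not affect the pod sandbox and container starts recorded around them.
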
Jan 16 18:05:15.698372 containerd[1954]: time="2026-01-16T18:05:15.698265450Z" level=info msg="CreateContainer within sandbox \"fa313b52ddf2f506e910598c93ea4a4e7a72cc0888e60d44f668dbb599dd930a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"96b0f3604408997306db2f18162b7ced06c96a3002231d41af7d2b766989325f\"" Jan 16 18:05:15.700102 containerd[1954]: time="2026-01-16T18:05:15.700032078Z" level=info msg="StartContainer for \"96b0f3604408997306db2f18162b7ced06c96a3002231d41af7d2b766989325f\"" Jan 16 18:05:15.705389 containerd[1954]: time="2026-01-16T18:05:15.705200670Z" level=info msg="connecting to shim 96b0f3604408997306db2f18162b7ced06c96a3002231d41af7d2b766989325f" address="unix:///run/containerd/s/af0bba6825e5b6c8d91f73bceace28fa4571ee6789f68e9447216711d189068d" protocol=ttrpc version=3 Jan 16 18:05:15.744471 systemd[1]: Started cri-containerd-96b0f3604408997306db2f18162b7ced06c96a3002231d41af7d2b766989325f.scope - libcontainer container 96b0f3604408997306db2f18162b7ced06c96a3002231d41af7d2b766989325f. Jan 16 18:05:15.773000 audit: BPF prog-id=164 op=LOAD Jan 16 18:05:15.774000 audit: BPF prog-id=165 op=LOAD Jan 16 18:05:15.774000 audit[4016]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3845 pid=4016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:15.774000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936623066333630343430383939373330366462326631383136326237 Jan 16 18:05:15.774000 audit: BPF prog-id=165 op=UNLOAD Jan 16 18:05:15.774000 audit[4016]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3845 pid=4016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:15.774000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936623066333630343430383939373330366462326631383136326237 Jan 16 18:05:15.775000 audit: BPF prog-id=166 op=LOAD Jan 16 18:05:15.775000 audit[4016]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3845 pid=4016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:15.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936623066333630343430383939373330366462326631383136326237 Jan 16 18:05:15.775000 audit: BPF prog-id=167 op=LOAD Jan 16 18:05:15.775000 audit[4016]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3845 pid=4016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:15.775000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936623066333630343430383939373330366462326631383136326237 Jan 16 18:05:15.775000 audit: BPF prog-id=167 op=UNLOAD Jan 16 18:05:15.775000 audit[4016]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3845 pid=4016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:15.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936623066333630343430383939373330366462326631383136326237 Jan 16 18:05:15.775000 audit: BPF prog-id=166 op=UNLOAD Jan 16 18:05:15.775000 audit[4016]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3845 pid=4016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:15.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936623066333630343430383939373330366462326631383136326237 Jan 16 18:05:15.775000 audit: BPF prog-id=168 op=LOAD Jan 16 18:05:15.775000 audit[4016]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3845 pid=4016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:15.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936623066333630343430383939373330366462326631383136326237 Jan 16 18:05:15.835669 containerd[1954]: time="2026-01-16T18:05:15.835554798Z" level=info msg="StartContainer for \"96b0f3604408997306db2f18162b7ced06c96a3002231d41af7d2b766989325f\" returns successfully" Jan 16 18:05:15.969235 kubelet[3428]: I0116 18:05:15.967901 3428 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77ffc79fcf-lgdpr" podStartSLOduration=1.333623942 podStartE2EDuration="3.967879699s" podCreationTimestamp="2026-01-16 18:05:12 +0000 UTC" firstStartedPulling="2026-01-16 18:05:12.990553984 +0000 UTC m=+33.548252868" lastFinishedPulling="2026-01-16 18:05:15.624809753 +0000 UTC m=+36.182508625" observedRunningTime="2026-01-16 18:05:15.966977707 +0000 UTC m=+36.524676615" watchObservedRunningTime="2026-01-16 18:05:15.967879699 +0000 UTC m=+36.525578583" Jan 16 18:05:15.972493 kubelet[3428]: E0116 18:05:15.970179 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:15.972493 kubelet[3428]: W0116 18:05:15.970336 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file 
not found in $PATH, output: "" Jan 16 18:05:15.972493 kubelet[3428]: E0116 18:05:15.970370 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:15.972493 kubelet[3428]: E0116 18:05:15.971522 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:15.972493 kubelet[3428]: W0116 18:05:15.971551 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:15.972493 kubelet[3428]: E0116 18:05:15.971850 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:15.974150 kubelet[3428]: E0116 18:05:15.973938 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:15.974296 kubelet[3428]: W0116 18:05:15.974269 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:15.974593 kubelet[3428]: E0116 18:05:15.974305 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:15.975788 kubelet[3428]: E0116 18:05:15.975740 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:15.975788 kubelet[3428]: W0116 18:05:15.975778 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:15.976313 kubelet[3428]: E0116 18:05:15.975811 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:15.977847 kubelet[3428]: E0116 18:05:15.977669 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:15.977847 kubelet[3428]: W0116 18:05:15.977834 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:15.978197 kubelet[3428]: E0116 18:05:15.978005 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:15.979537 kubelet[3428]: E0116 18:05:15.979482 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:15.979689 kubelet[3428]: W0116 18:05:15.979524 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:15.979749 kubelet[3428]: E0116 18:05:15.979695 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:15.982270 kubelet[3428]: E0116 18:05:15.981804 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:15.982270 kubelet[3428]: W0116 18:05:15.981869 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:15.982270 kubelet[3428]: E0116 18:05:15.981901 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:15.983500 kubelet[3428]: E0116 18:05:15.983355 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:15.983738 kubelet[3428]: W0116 18:05:15.983591 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:15.983738 kubelet[3428]: E0116 18:05:15.983627 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:15.984553 kubelet[3428]: E0116 18:05:15.984521 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:15.985412 kubelet[3428]: W0116 18:05:15.984698 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:15.985412 kubelet[3428]: E0116 18:05:15.985203 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:15.985849 kubelet[3428]: E0116 18:05:15.985804 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:15.986142 kubelet[3428]: W0116 18:05:15.986083 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:15.986667 kubelet[3428]: E0116 18:05:15.986376 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:15.987298 kubelet[3428]: E0116 18:05:15.987263 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:15.987885 kubelet[3428]: W0116 18:05:15.987555 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:15.987885 kubelet[3428]: E0116 18:05:15.987601 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:15.989223 kubelet[3428]: E0116 18:05:15.988600 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:15.989223 kubelet[3428]: W0116 18:05:15.988628 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:15.989223 kubelet[3428]: E0116 18:05:15.988655 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:15.990648 kubelet[3428]: E0116 18:05:15.990338 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:15.991458 kubelet[3428]: W0116 18:05:15.991110 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:15.991458 kubelet[3428]: E0116 18:05:15.991193 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:15.992299 kubelet[3428]: E0116 18:05:15.992242 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:15.992990 kubelet[3428]: W0116 18:05:15.992484 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:15.992990 kubelet[3428]: E0116 18:05:15.992521 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:15.995256 kubelet[3428]: E0116 18:05:15.994918 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:15.995256 kubelet[3428]: W0116 18:05:15.994989 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:15.995256 kubelet[3428]: E0116 18:05:15.995025 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:16.008237 kubelet[3428]: E0116 18:05:16.008148 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:16.008237 kubelet[3428]: W0116 18:05:16.008186 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:16.008691 kubelet[3428]: E0116 18:05:16.008340 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:16.009522 kubelet[3428]: E0116 18:05:16.009388 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:16.009522 kubelet[3428]: W0116 18:05:16.009467 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:16.010068 kubelet[3428]: E0116 18:05:16.010041 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:16.010361 kubelet[3428]: E0116 18:05:16.010228 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:16.010361 kubelet[3428]: W0116 18:05:16.010327 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:16.010518 kubelet[3428]: E0116 18:05:16.010375 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:16.010728 kubelet[3428]: E0116 18:05:16.010701 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:16.010836 kubelet[3428]: W0116 18:05:16.010728 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:16.010836 kubelet[3428]: E0116 18:05:16.010761 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:16.011630 kubelet[3428]: E0116 18:05:16.011428 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:16.011630 kubelet[3428]: W0116 18:05:16.011460 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:16.011630 kubelet[3428]: E0116 18:05:16.011501 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:16.013459 kubelet[3428]: E0116 18:05:16.013389 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:16.013748 kubelet[3428]: W0116 18:05:16.013558 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:16.013748 kubelet[3428]: E0116 18:05:16.013634 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:16.014994 kubelet[3428]: E0116 18:05:16.014824 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:16.015475 kubelet[3428]: W0116 18:05:16.015095 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:16.015475 kubelet[3428]: E0116 18:05:16.015388 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:16.016414 kubelet[3428]: E0116 18:05:16.016314 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:16.016414 kubelet[3428]: W0116 18:05:16.016347 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:16.016756 kubelet[3428]: E0116 18:05:16.016658 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:16.017761 kubelet[3428]: E0116 18:05:16.017625 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:16.017761 kubelet[3428]: W0116 18:05:16.017677 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:16.018388 kubelet[3428]: E0116 18:05:16.017784 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:16.018749 kubelet[3428]: E0116 18:05:16.018695 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:16.018947 kubelet[3428]: W0116 18:05:16.018826 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:16.018947 kubelet[3428]: E0116 18:05:16.018898 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:16.020600 kubelet[3428]: E0116 18:05:16.020514 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:16.020600 kubelet[3428]: W0116 18:05:16.020657 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:16.021386 kubelet[3428]: E0116 18:05:16.020730 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:16.023442 kubelet[3428]: E0116 18:05:16.022271 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:16.023442 kubelet[3428]: W0116 18:05:16.022306 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:16.023815 kubelet[3428]: E0116 18:05:16.023782 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:16.024930 kubelet[3428]: E0116 18:05:16.024847 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:16.024930 kubelet[3428]: W0116 18:05:16.024881 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:16.025488 kubelet[3428]: E0116 18:05:16.025344 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:16.026131 kubelet[3428]: E0116 18:05:16.026049 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:16.026572 kubelet[3428]: W0116 18:05:16.026078 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:16.026572 kubelet[3428]: E0116 18:05:16.026398 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:16.027324 kubelet[3428]: E0116 18:05:16.027232 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:16.028016 kubelet[3428]: W0116 18:05:16.027263 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:16.028243 kubelet[3428]: E0116 18:05:16.028182 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:16.029824 kubelet[3428]: E0116 18:05:16.029235 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:16.029824 kubelet[3428]: W0116 18:05:16.029267 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:16.029824 kubelet[3428]: E0116 18:05:16.029298 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:16.030564 kubelet[3428]: E0116 18:05:16.030530 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:16.030752 kubelet[3428]: W0116 18:05:16.030724 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:16.030883 kubelet[3428]: E0116 18:05:16.030859 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:16.031775 kubelet[3428]: E0116 18:05:16.031713 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:16.031775 kubelet[3428]: W0116 18:05:16.031752 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:16.031961 kubelet[3428]: E0116 18:05:16.031795 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:16.733687 kubelet[3428]: E0116 18:05:16.733576 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88slq" podUID="74813863-8ca6-40d9-bd92-5b37511fc2e0" Jan 16 18:05:16.879710 containerd[1954]: time="2026-01-16T18:05:16.879628807Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:05:16.881955 containerd[1954]: time="2026-01-16T18:05:16.881632135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:16.884065 containerd[1954]: time="2026-01-16T18:05:16.884008855Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:05:16.889057 containerd[1954]: time="2026-01-16T18:05:16.888991892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:05:16.890418 containerd[1954]: time="2026-01-16T18:05:16.890192276Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.263007555s" Jan 16 18:05:16.890418 containerd[1954]: time="2026-01-16T18:05:16.890249084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 16 18:05:16.899076 containerd[1954]: time="2026-01-16T18:05:16.899012756Z" level=info msg="CreateContainer within sandbox \"27e8dac55229e09e774e303070c9304127d14a5285d64087a9f514e361bd59d7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 16 18:05:16.922611 containerd[1954]: time="2026-01-16T18:05:16.922473524Z" level=info msg="Container e5b23be0c9e8ea397a460dd23c134e40df65efe41f78eaa5f59ac4651a0d604b: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:05:16.946076 containerd[1954]: time="2026-01-16T18:05:16.945973988Z" level=info msg="CreateContainer within sandbox \"27e8dac55229e09e774e303070c9304127d14a5285d64087a9f514e361bd59d7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e5b23be0c9e8ea397a460dd23c134e40df65efe41f78eaa5f59ac4651a0d604b\"" Jan 16 18:05:16.947374 containerd[1954]: time="2026-01-16T18:05:16.946933556Z" level=info msg="StartContainer for \"e5b23be0c9e8ea397a460dd23c134e40df65efe41f78eaa5f59ac4651a0d604b\"" Jan 16 18:05:16.952704 containerd[1954]: time="2026-01-16T18:05:16.952626644Z" level=info msg="connecting to shim e5b23be0c9e8ea397a460dd23c134e40df65efe41f78eaa5f59ac4651a0d604b" address="unix:///run/containerd/s/70494f1765970de2e9965b3efe6d8c870947b9ecfe91c1d62c7153ce1f247f09" protocol=ttrpc version=3 Jan 16 18:05:17.002746 kubelet[3428]: E0116 18:05:17.002012 3428 driver-call.go:262] Failed to unmarshal 
output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.002746 kubelet[3428]: W0116 18:05:17.002332 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.002746 kubelet[3428]: E0116 18:05:17.002375 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.004433 kubelet[3428]: E0116 18:05:17.004282 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.004433 kubelet[3428]: W0116 18:05:17.004314 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.004433 kubelet[3428]: E0116 18:05:17.004345 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.005028 kubelet[3428]: E0116 18:05:17.005004 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.005243 kubelet[3428]: W0116 18:05:17.005177 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.005243 kubelet[3428]: E0116 18:05:17.005213 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.006079 kubelet[3428]: E0116 18:05:17.005939 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.006079 kubelet[3428]: W0116 18:05:17.005966 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.006079 kubelet[3428]: E0116 18:05:17.005990 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.006657 kubelet[3428]: E0116 18:05:17.006627 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.006854 kubelet[3428]: W0116 18:05:17.006749 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.006854 kubelet[3428]: E0116 18:05:17.006780 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:17.007375 kubelet[3428]: E0116 18:05:17.007250 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.007375 kubelet[3428]: W0116 18:05:17.007274 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.007375 kubelet[3428]: E0116 18:05:17.007298 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.008296 kubelet[3428]: E0116 18:05:17.008265 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.008595 kubelet[3428]: W0116 18:05:17.008419 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.008595 kubelet[3428]: E0116 18:05:17.008495 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.009073 kubelet[3428]: E0116 18:05:17.008983 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.009073 kubelet[3428]: W0116 18:05:17.009007 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.009514 kubelet[3428]: E0116 18:05:17.009385 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.009938 kubelet[3428]: E0116 18:05:17.009906 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.010214 kubelet[3428]: W0116 18:05:17.010042 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.010214 kubelet[3428]: E0116 18:05:17.010083 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.010646 kubelet[3428]: E0116 18:05:17.010620 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.010891 kubelet[3428]: W0116 18:05:17.010775 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.010891 kubelet[3428]: E0116 18:05:17.010809 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:17.011458 kubelet[3428]: E0116 18:05:17.011331 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.011458 kubelet[3428]: W0116 18:05:17.011356 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.011458 kubelet[3428]: E0116 18:05:17.011382 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.011910 kubelet[3428]: E0116 18:05:17.011884 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.012187 kubelet[3428]: W0116 18:05:17.012012 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.012187 kubelet[3428]: E0116 18:05:17.012043 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.012618 kubelet[3428]: E0116 18:05:17.012586 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.012797 kubelet[3428]: W0116 18:05:17.012715 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.012985 kubelet[3428]: E0116 18:05:17.012872 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.013467 kubelet[3428]: E0116 18:05:17.013435 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.013739 kubelet[3428]: W0116 18:05:17.013611 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.013739 kubelet[3428]: E0116 18:05:17.013648 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.014197 kubelet[3428]: E0116 18:05:17.014170 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.014417 kubelet[3428]: W0116 18:05:17.014304 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.014417 kubelet[3428]: E0116 18:05:17.014335 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:17.018707 kubelet[3428]: E0116 18:05:17.018591 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.018707 kubelet[3428]: W0116 18:05:17.018649 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.019111 kubelet[3428]: E0116 18:05:17.018678 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.019858 kubelet[3428]: E0116 18:05:17.019769 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.019858 kubelet[3428]: W0116 18:05:17.019825 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.020239 kubelet[3428]: E0116 18:05:17.020092 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.020866 kubelet[3428]: E0116 18:05:17.020717 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.020866 kubelet[3428]: W0116 18:05:17.020799 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.020866 kubelet[3428]: E0116 18:05:17.020828 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.021678 kubelet[3428]: E0116 18:05:17.021625 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.022003 kubelet[3428]: W0116 18:05:17.021828 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.022003 kubelet[3428]: E0116 18:05:17.021869 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.022690 kubelet[3428]: E0116 18:05:17.022593 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.022690 kubelet[3428]: W0116 18:05:17.022652 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.023104 kubelet[3428]: E0116 18:05:17.022987 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:17.022877 systemd[1]: Started cri-containerd-e5b23be0c9e8ea397a460dd23c134e40df65efe41f78eaa5f59ac4651a0d604b.scope - libcontainer container e5b23be0c9e8ea397a460dd23c134e40df65efe41f78eaa5f59ac4651a0d604b. Jan 16 18:05:17.024205 kubelet[3428]: E0116 18:05:17.024004 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.024205 kubelet[3428]: W0116 18:05:17.024163 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.024688 kubelet[3428]: E0116 18:05:17.024533 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.025141 kubelet[3428]: E0116 18:05:17.025012 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.025141 kubelet[3428]: W0116 18:05:17.025089 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.025461 kubelet[3428]: E0116 18:05:17.025392 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.025976 kubelet[3428]: E0116 18:05:17.025895 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.025976 kubelet[3428]: W0116 18:05:17.025943 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.026341 kubelet[3428]: E0116 18:05:17.026267 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.027398 kubelet[3428]: E0116 18:05:17.026954 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.027398 kubelet[3428]: W0116 18:05:17.026980 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.027630 kubelet[3428]: E0116 18:05:17.027571 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:17.028350 kubelet[3428]: E0116 18:05:17.028167 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.028350 kubelet[3428]: W0116 18:05:17.028196 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.028901 kubelet[3428]: E0116 18:05:17.028715 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.028901 kubelet[3428]: W0116 18:05:17.028739 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.029822 kubelet[3428]: E0116 18:05:17.029337 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.029822 kubelet[3428]: W0116 18:05:17.029365 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.029822 kubelet[3428]: E0116 18:05:17.029394 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.030737 kubelet[3428]: E0116 18:05:17.030709 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.030884 kubelet[3428]: W0116 18:05:17.030858 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.031006 kubelet[3428]: E0116 18:05:17.030982 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.031197 kubelet[3428]: E0116 18:05:17.031172 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.031769 kubelet[3428]: E0116 18:05:17.031739 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.032207 kubelet[3428]: W0116 18:05:17.031832 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.032207 kubelet[3428]: E0116 18:05:17.031863 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:17.032986 kubelet[3428]: E0116 18:05:17.032742 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.032986 kubelet[3428]: W0116 18:05:17.032770 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.032986 kubelet[3428]: E0116 18:05:17.032799 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.034790 kubelet[3428]: E0116 18:05:17.034300 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.034790 kubelet[3428]: W0116 18:05:17.034331 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.034790 kubelet[3428]: E0116 18:05:17.034360 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.034790 kubelet[3428]: E0116 18:05:17.034403 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.036352 kubelet[3428]: E0116 18:05:17.035973 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.036352 kubelet[3428]: W0116 18:05:17.036008 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.037090 kubelet[3428]: E0116 18:05:17.036606 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:05:17.056758 kubelet[3428]: E0116 18:05:17.056701 3428 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:05:17.057153 kubelet[3428]: W0116 18:05:17.056938 3428 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:05:17.057153 kubelet[3428]: E0116 18:05:17.056981 3428 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:05:17.090000 audit[4150]: NETFILTER_CFG table=filter:123 family=2 entries=21 op=nft_register_rule pid=4150 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:17.090000 audit[4150]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff5b9c480 a2=0 a3=1 items=0 ppid=3574 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:17.090000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:17.096000 audit[4150]: NETFILTER_CFG table=nat:124 family=2 entries=19 op=nft_register_chain pid=4150 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:17.096000 audit[4150]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffff5b9c480 a2=0 a3=1 items=0 ppid=3574 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:17.096000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:17.146000 audit: BPF prog-id=169 op=LOAD Jan 16 18:05:17.146000 audit[4094]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3934 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:17.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535623233626530633965386561333937613436306464323363313334 Jan 16 18:05:17.146000 audit: BPF prog-id=170 op=LOAD Jan 16 18:05:17.146000 audit[4094]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3934 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:17.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535623233626530633965386561333937613436306464323363313334 Jan 16 18:05:17.146000 audit: BPF prog-id=170 op=UNLOAD Jan 16 18:05:17.146000 audit[4094]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3934 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:17.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535623233626530633965386561333937613436306464323363313334 Jan 16 18:05:17.146000 audit: BPF prog-id=169 op=UNLOAD Jan 16 18:05:17.146000 audit[4094]: SYSCALL arch=c00000b7 
syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3934 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:17.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535623233626530633965386561333937613436306464323363313334 Jan 16 18:05:17.146000 audit: BPF prog-id=171 op=LOAD Jan 16 18:05:17.146000 audit[4094]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3934 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:17.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535623233626530633965386561333937613436306464323363313334 Jan 16 18:05:17.187170 containerd[1954]: time="2026-01-16T18:05:17.184793753Z" level=info msg="StartContainer for \"e5b23be0c9e8ea397a460dd23c134e40df65efe41f78eaa5f59ac4651a0d604b\" returns successfully" Jan 16 18:05:17.213835 systemd[1]: cri-containerd-e5b23be0c9e8ea397a460dd23c134e40df65efe41f78eaa5f59ac4651a0d604b.scope: Deactivated successfully. Jan 16 18:05:17.217000 audit: BPF prog-id=171 op=UNLOAD Jan 16 18:05:17.226917 containerd[1954]: time="2026-01-16T18:05:17.226715789Z" level=info msg="received container exit event container_id:\"e5b23be0c9e8ea397a460dd23c134e40df65efe41f78eaa5f59ac4651a0d604b\" id:\"e5b23be0c9e8ea397a460dd23c134e40df65efe41f78eaa5f59ac4651a0d604b\" pid:4142 exited_at:{seconds:1768586717 nanos:222840017}" Jan 16 18:05:17.268906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5b23be0c9e8ea397a460dd23c134e40df65efe41f78eaa5f59ac4651a0d604b-rootfs.mount: Deactivated successfully. 
Jan 16 18:05:17.965730 containerd[1954]: time="2026-01-16T18:05:17.964871025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 16 18:05:18.733629 kubelet[3428]: E0116 18:05:18.733541 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88slq" podUID="74813863-8ca6-40d9-bd92-5b37511fc2e0" Jan 16 18:05:20.733792 kubelet[3428]: E0116 18:05:20.733730 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88slq" podUID="74813863-8ca6-40d9-bd92-5b37511fc2e0" Jan 16 18:05:20.853228 containerd[1954]: time="2026-01-16T18:05:20.853095815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:05:20.854689 containerd[1954]: time="2026-01-16T18:05:20.854610575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Jan 16 18:05:20.855960 containerd[1954]: time="2026-01-16T18:05:20.855495203Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:05:20.859429 containerd[1954]: time="2026-01-16T18:05:20.859380323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:05:20.860619 containerd[1954]: time="2026-01-16T18:05:20.860560391Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.89553627s" Jan 16 18:05:20.860727 containerd[1954]: time="2026-01-16T18:05:20.860615279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 16 18:05:20.868562 containerd[1954]: time="2026-01-16T18:05:20.867561935Z" level=info msg="CreateContainer within sandbox \"27e8dac55229e09e774e303070c9304127d14a5285d64087a9f514e361bd59d7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 16 18:05:20.884501 containerd[1954]: time="2026-01-16T18:05:20.884447843Z" level=info msg="Container b56ad6f6ac77b29259da4add747bef57df92ddd6e7f68b93e75c0b0a94e83353: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:05:20.892258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1200769230.mount: Deactivated successfully. 
Jan 16 18:05:20.904446 containerd[1954]: time="2026-01-16T18:05:20.904392839Z" level=info msg="CreateContainer within sandbox \"27e8dac55229e09e774e303070c9304127d14a5285d64087a9f514e361bd59d7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b56ad6f6ac77b29259da4add747bef57df92ddd6e7f68b93e75c0b0a94e83353\"" Jan 16 18:05:20.906587 containerd[1954]: time="2026-01-16T18:05:20.906540863Z" level=info msg="StartContainer for \"b56ad6f6ac77b29259da4add747bef57df92ddd6e7f68b93e75c0b0a94e83353\"" Jan 16 18:05:20.912896 containerd[1954]: time="2026-01-16T18:05:20.912845171Z" level=info msg="connecting to shim b56ad6f6ac77b29259da4add747bef57df92ddd6e7f68b93e75c0b0a94e83353" address="unix:///run/containerd/s/70494f1765970de2e9965b3efe6d8c870947b9ecfe91c1d62c7153ce1f247f09" protocol=ttrpc version=3 Jan 16 18:05:20.952516 systemd[1]: Started cri-containerd-b56ad6f6ac77b29259da4add747bef57df92ddd6e7f68b93e75c0b0a94e83353.scope - libcontainer container b56ad6f6ac77b29259da4add747bef57df92ddd6e7f68b93e75c0b0a94e83353. Jan 16 18:05:21.032000 audit: BPF prog-id=172 op=LOAD Jan 16 18:05:21.035218 kernel: kauditd_printk_skb: 84 callbacks suppressed Jan 16 18:05:21.035352 kernel: audit: type=1334 audit(1768586721.032:574): prog-id=172 op=LOAD Jan 16 18:05:21.032000 audit[4192]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3934 pid=4192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:21.044030 kernel: audit: type=1300 audit(1768586721.032:574): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3934 pid=4192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:21.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235366164366636616337376232393235396461346164643734376265 Jan 16 18:05:21.050481 kernel: audit: type=1327 audit(1768586721.032:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235366164366636616337376232393235396461346164643734376265 Jan 16 18:05:21.050599 kernel: audit: type=1334 audit(1768586721.033:575): prog-id=173 op=LOAD Jan 16 18:05:21.033000 audit: BPF prog-id=173 op=LOAD Jan 16 18:05:21.033000 audit[4192]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3934 pid=4192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:21.059105 kernel: audit: type=1300 audit(1768586721.033:575): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3934 pid=4192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:21.059335 kernel: audit: type=1327 audit(1768586721.033:575): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235366164366636616337376232393235396461346164643734376265 Jan 16 18:05:21.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235366164366636616337376232393235396461346164643734376265 Jan 16 18:05:21.035000 audit: BPF prog-id=173 op=UNLOAD Jan 16 18:05:21.067071 kernel: audit: type=1334 audit(1768586721.035:576): prog-id=173 op=UNLOAD Jan 16 18:05:21.035000 audit[4192]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3934 pid=4192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:21.076530 kernel: audit: type=1300 audit(1768586721.035:576): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3934 pid=4192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:21.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235366164366636616337376232393235396461346164643734376265 Jan 16 18:05:21.083245 kernel: audit: type=1327 audit(1768586721.035:576): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235366164366636616337376232393235396461346164643734376265 Jan 16 18:05:21.035000 audit: BPF prog-id=172 op=UNLOAD Jan 16 18:05:21.085438 kernel: audit: type=1334 audit(1768586721.035:577): prog-id=172 op=UNLOAD Jan 16 18:05:21.035000 audit[4192]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3934 pid=4192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:21.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235366164366636616337376232393235396461346164643734376265 Jan 16 18:05:21.035000 audit: BPF prog-id=174 op=LOAD Jan 16 18:05:21.035000 audit[4192]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3934 pid=4192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:21.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235366164366636616337376232393235396461346164643734376265 Jan 16 18:05:21.113992 containerd[1954]: 
time="2026-01-16T18:05:21.113915216Z" level=info msg="StartContainer for \"b56ad6f6ac77b29259da4add747bef57df92ddd6e7f68b93e75c0b0a94e83353\" returns successfully" Jan 16 18:05:22.040317 containerd[1954]: time="2026-01-16T18:05:22.040238817Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 18:05:22.045195 systemd[1]: cri-containerd-b56ad6f6ac77b29259da4add747bef57df92ddd6e7f68b93e75c0b0a94e83353.scope: Deactivated successfully. Jan 16 18:05:22.046285 systemd[1]: cri-containerd-b56ad6f6ac77b29259da4add747bef57df92ddd6e7f68b93e75c0b0a94e83353.scope: Consumed 942ms CPU time, 192.2M memory peak, 165.9M written to disk. Jan 16 18:05:22.049609 containerd[1954]: time="2026-01-16T18:05:22.049388877Z" level=info msg="received container exit event container_id:\"b56ad6f6ac77b29259da4add747bef57df92ddd6e7f68b93e75c0b0a94e83353\" id:\"b56ad6f6ac77b29259da4add747bef57df92ddd6e7f68b93e75c0b0a94e83353\" pid:4207 exited_at:{seconds:1768586722 nanos:48805569}" Jan 16 18:05:22.049000 audit: BPF prog-id=174 op=UNLOAD Jan 16 18:05:22.095211 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b56ad6f6ac77b29259da4add747bef57df92ddd6e7f68b93e75c0b0a94e83353-rootfs.mount: Deactivated successfully. Jan 16 18:05:22.144159 kubelet[3428]: I0116 18:05:22.141848 3428 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 16 18:05:22.221108 systemd[1]: Created slice kubepods-burstable-pod73afd704_2c12_49ff_a165_c96fb49c636e.slice - libcontainer container kubepods-burstable-pod73afd704_2c12_49ff_a165_c96fb49c636e.slice. Jan 16 18:05:22.223368 kubelet[3428]: I0116 18:05:22.221929 3428 status_manager.go:890] "Failed to get status for pod" podUID="73afd704-2c12-49ff-a165-c96fb49c636e" pod="kube-system/coredns-668d6bf9bc-z6xnx" err="pods \"coredns-668d6bf9bc-z6xnx\" is forbidden: User \"system:node:ip-172-31-22-249\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-22-249' and this object" Jan 16 18:05:22.223368 kubelet[3428]: W0116 18:05:22.222262 3428 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-22-249" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-22-249' and this object Jan 16 18:05:22.223368 kubelet[3428]: E0116 18:05:22.222310 3428 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ip-172-31-22-249\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-22-249' and this object" logger="UnhandledError" Jan 16 18:05:22.254356 systemd[1]: Created slice kubepods-besteffort-podd70fefd9_029e_4ddf_8559_5e71028d4fd0.slice - libcontainer container kubepods-besteffort-podd70fefd9_029e_4ddf_8559_5e71028d4fd0.slice. 
Jan 16 18:05:22.260609 kubelet[3428]: I0116 18:05:22.258459 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0aceb407-9a32-413e-8736-06bca676c39e-whisker-backend-key-pair\") pod \"whisker-7d554d9f85-j7g4v\" (UID: \"0aceb407-9a32-413e-8736-06bca676c39e\") " pod="calico-system/whisker-7d554d9f85-j7g4v" Jan 16 18:05:22.260609 kubelet[3428]: I0116 18:05:22.258527 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73afd704-2c12-49ff-a165-c96fb49c636e-config-volume\") pod \"coredns-668d6bf9bc-z6xnx\" (UID: \"73afd704-2c12-49ff-a165-c96fb49c636e\") " pod="kube-system/coredns-668d6bf9bc-z6xnx" Jan 16 18:05:22.260609 kubelet[3428]: I0116 18:05:22.258566 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjnjs\" (UniqueName: \"kubernetes.io/projected/73afd704-2c12-49ff-a165-c96fb49c636e-kube-api-access-xjnjs\") pod \"coredns-668d6bf9bc-z6xnx\" (UID: \"73afd704-2c12-49ff-a165-c96fb49c636e\") " pod="kube-system/coredns-668d6bf9bc-z6xnx" Jan 16 18:05:22.260609 kubelet[3428]: I0116 18:05:22.258613 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d57dc7e-5c0c-47c3-a61e-c7073133e9a5-config-volume\") pod \"coredns-668d6bf9bc-cbrj8\" (UID: \"1d57dc7e-5c0c-47c3-a61e-c7073133e9a5\") " pod="kube-system/coredns-668d6bf9bc-cbrj8" Jan 16 18:05:22.260609 kubelet[3428]: I0116 18:05:22.258649 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87hrp\" (UniqueName: \"kubernetes.io/projected/1d57dc7e-5c0c-47c3-a61e-c7073133e9a5-kube-api-access-87hrp\") pod \"coredns-668d6bf9bc-cbrj8\" (UID: \"1d57dc7e-5c0c-47c3-a61e-c7073133e9a5\") " pod="kube-system/coredns-668d6bf9bc-cbrj8" Jan 16 18:05:22.261147 kubelet[3428]: I0116 18:05:22.258691 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgjq7\" (UniqueName: \"kubernetes.io/projected/d70fefd9-029e-4ddf-8559-5e71028d4fd0-kube-api-access-hgjq7\") pod \"calico-kube-controllers-76db965947-cmgcq\" (UID: \"d70fefd9-029e-4ddf-8559-5e71028d4fd0\") " pod="calico-system/calico-kube-controllers-76db965947-cmgcq" Jan 16 18:05:22.261147 kubelet[3428]: I0116 18:05:22.258729 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d70fefd9-029e-4ddf-8559-5e71028d4fd0-tigera-ca-bundle\") pod \"calico-kube-controllers-76db965947-cmgcq\" (UID: \"d70fefd9-029e-4ddf-8559-5e71028d4fd0\") " pod="calico-system/calico-kube-controllers-76db965947-cmgcq" Jan 16 18:05:22.261147 kubelet[3428]: I0116 18:05:22.258779 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58656\" (UniqueName: \"kubernetes.io/projected/0aceb407-9a32-413e-8736-06bca676c39e-kube-api-access-58656\") pod \"whisker-7d554d9f85-j7g4v\" (UID: \"0aceb407-9a32-413e-8736-06bca676c39e\") " pod="calico-system/whisker-7d554d9f85-j7g4v" Jan 16 18:05:22.261147 kubelet[3428]: I0116 18:05:22.258821 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0aceb407-9a32-413e-8736-06bca676c39e-whisker-ca-bundle\") pod \"whisker-7d554d9f85-j7g4v\" (UID: \"0aceb407-9a32-413e-8736-06bca676c39e\") " pod="calico-system/whisker-7d554d9f85-j7g4v" Jan 16 18:05:22.276913 systemd[1]: Created slice kubepods-burstable-pod1d57dc7e_5c0c_47c3_a61e_c7073133e9a5.slice - libcontainer container kubepods-burstable-pod1d57dc7e_5c0c_47c3_a61e_c7073133e9a5.slice. Jan 16 18:05:22.289856 kubelet[3428]: W0116 18:05:22.287811 3428 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-22-249" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-22-249' and this object Jan 16 18:05:22.289856 kubelet[3428]: E0116 18:05:22.287877 3428 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-22-249\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ip-172-31-22-249' and this object" logger="UnhandledError" Jan 16 18:05:22.300948 kubelet[3428]: W0116 18:05:22.300804 3428 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ip-172-31-22-249" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-22-249' and this object Jan 16 18:05:22.302237 kubelet[3428]: W0116 18:05:22.301217 3428 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ip-172-31-22-249" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-22-249' and this object Jan 16 18:05:22.304880 kubelet[3428]: E0116 18:05:22.304176 3428 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ip-172-31-22-249\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-22-249' and this object" logger="UnhandledError" Jan 16 18:05:22.304880 kubelet[3428]: W0116 18:05:22.301288 3428 reflector.go:569] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ip-172-31-22-249" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-22-249' and this object Jan 16 18:05:22.304880 kubelet[3428]: E0116 18:05:22.304261 3428 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ip-172-31-22-249\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-22-249' and this object" logger="UnhandledError" Jan 16 18:05:22.304880 kubelet[3428]: W0116 18:05:22.301347 3428 reflector.go:569] 
object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:ip-172-31-22-249" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-22-249' and this object Jan 16 18:05:22.304880 kubelet[3428]: E0116 18:05:22.304317 3428 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:ip-172-31-22-249\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-22-249' and this object" logger="UnhandledError" Jan 16 18:05:22.307413 kubelet[3428]: E0116 18:05:22.302031 3428 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ip-172-31-22-249\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ip-172-31-22-249' and this object" logger="UnhandledError" Jan 16 18:05:22.308061 systemd[1]: Created slice kubepods-besteffort-pod0aceb407_9a32_413e_8736_06bca676c39e.slice - libcontainer container kubepods-besteffort-pod0aceb407_9a32_413e_8736_06bca676c39e.slice. Jan 16 18:05:22.331575 systemd[1]: Created slice kubepods-besteffort-pod20a749f5_b28f_4523_9135_e8877a359519.slice - libcontainer container kubepods-besteffort-pod20a749f5_b28f_4523_9135_e8877a359519.slice. Jan 16 18:05:22.346662 systemd[1]: Created slice kubepods-besteffort-pod3932bc10_72fd_4993_add3_bcb26a36ba2d.slice - libcontainer container kubepods-besteffort-pod3932bc10_72fd_4993_add3_bcb26a36ba2d.slice. 
Jan 16 18:05:22.360957 kubelet[3428]: I0116 18:05:22.360891 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3932bc10-72fd-4993-add3-bcb26a36ba2d-goldmane-key-pair\") pod \"goldmane-666569f655-t9x2h\" (UID: \"3932bc10-72fd-4993-add3-bcb26a36ba2d\") " pod="calico-system/goldmane-666569f655-t9x2h" Jan 16 18:05:22.361166 kubelet[3428]: I0116 18:05:22.361074 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3932bc10-72fd-4993-add3-bcb26a36ba2d-config\") pod \"goldmane-666569f655-t9x2h\" (UID: \"3932bc10-72fd-4993-add3-bcb26a36ba2d\") " pod="calico-system/goldmane-666569f655-t9x2h" Jan 16 18:05:22.361166 kubelet[3428]: I0116 18:05:22.361138 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3932bc10-72fd-4993-add3-bcb26a36ba2d-goldmane-ca-bundle\") pod \"goldmane-666569f655-t9x2h\" (UID: \"3932bc10-72fd-4993-add3-bcb26a36ba2d\") " pod="calico-system/goldmane-666569f655-t9x2h" Jan 16 18:05:22.361291 kubelet[3428]: I0116 18:05:22.361185 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk276\" (UniqueName: \"kubernetes.io/projected/3932bc10-72fd-4993-add3-bcb26a36ba2d-kube-api-access-kk276\") pod \"goldmane-666569f655-t9x2h\" (UID: \"3932bc10-72fd-4993-add3-bcb26a36ba2d\") " pod="calico-system/goldmane-666569f655-t9x2h" Jan 16 18:05:22.361366 kubelet[3428]: I0116 18:05:22.361334 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkph5\" (UniqueName: \"kubernetes.io/projected/20a749f5-b28f-4523-9135-e8877a359519-kube-api-access-xkph5\") pod \"calico-apiserver-54df9c5477-js6cv\" (UID: \"20a749f5-b28f-4523-9135-e8877a359519\") " pod="calico-apiserver/calico-apiserver-54df9c5477-js6cv" Jan 16 18:05:22.361426 kubelet[3428]: I0116 18:05:22.361375 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8adc081a-39d6-4153-ae00-f3df7e2ba175-calico-apiserver-certs\") pod \"calico-apiserver-54df9c5477-tm29z\" (UID: \"8adc081a-39d6-4153-ae00-f3df7e2ba175\") " pod="calico-apiserver/calico-apiserver-54df9c5477-tm29z" Jan 16 18:05:22.361426 kubelet[3428]: I0116 18:05:22.361415 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/20a749f5-b28f-4523-9135-e8877a359519-calico-apiserver-certs\") pod \"calico-apiserver-54df9c5477-js6cv\" (UID: \"20a749f5-b28f-4523-9135-e8877a359519\") " pod="calico-apiserver/calico-apiserver-54df9c5477-js6cv" Jan 16 18:05:22.361528 kubelet[3428]: I0116 18:05:22.361451 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbr88\" (UniqueName: \"kubernetes.io/projected/8adc081a-39d6-4153-ae00-f3df7e2ba175-kube-api-access-rbr88\") pod \"calico-apiserver-54df9c5477-tm29z\" (UID: \"8adc081a-39d6-4153-ae00-f3df7e2ba175\") " pod="calico-apiserver/calico-apiserver-54df9c5477-tm29z" Jan 16 18:05:22.362716 systemd[1]: Created slice kubepods-besteffort-pod8adc081a_39d6_4153_ae00_f3df7e2ba175.slice - libcontainer container 
kubepods-besteffort-pod8adc081a_39d6_4153_ae00_f3df7e2ba175.slice. Jan 16 18:05:22.568180 containerd[1954]: time="2026-01-16T18:05:22.568010616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76db965947-cmgcq,Uid:d70fefd9-029e-4ddf-8559-5e71028d4fd0,Namespace:calico-system,Attempt:0,}" Jan 16 18:05:22.628470 containerd[1954]: time="2026-01-16T18:05:22.627475140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d554d9f85-j7g4v,Uid:0aceb407-9a32-413e-8736-06bca676c39e,Namespace:calico-system,Attempt:0,}" Jan 16 18:05:22.749342 systemd[1]: Created slice kubepods-besteffort-pod74813863_8ca6_40d9_bd92_5b37511fc2e0.slice - libcontainer container kubepods-besteffort-pod74813863_8ca6_40d9_bd92_5b37511fc2e0.slice. Jan 16 18:05:22.773507 containerd[1954]: time="2026-01-16T18:05:22.773368669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88slq,Uid:74813863-8ca6-40d9-bd92-5b37511fc2e0,Namespace:calico-system,Attempt:0,}" Jan 16 18:05:22.934483 containerd[1954]: time="2026-01-16T18:05:22.934313558Z" level=error msg="Failed to destroy network for sandbox \"04347548a278d2f51abf474dc6737898a52c4f57ce831374340b2c4e4135aea6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:05:22.940408 containerd[1954]: time="2026-01-16T18:05:22.940309106Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76db965947-cmgcq,Uid:d70fefd9-029e-4ddf-8559-5e71028d4fd0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"04347548a278d2f51abf474dc6737898a52c4f57ce831374340b2c4e4135aea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:05:22.940745 kubelet[3428]: E0116 18:05:22.940637 3428 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04347548a278d2f51abf474dc6737898a52c4f57ce831374340b2c4e4135aea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:05:22.940829 kubelet[3428]: E0116 18:05:22.940746 3428 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04347548a278d2f51abf474dc6737898a52c4f57ce831374340b2c4e4135aea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76db965947-cmgcq" Jan 16 18:05:22.940829 kubelet[3428]: E0116 18:05:22.940780 3428 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04347548a278d2f51abf474dc6737898a52c4f57ce831374340b2c4e4135aea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76db965947-cmgcq" Jan 16 18:05:22.940968 kubelet[3428]: E0116 18:05:22.940845 3428 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76db965947-cmgcq_calico-system(d70fefd9-029e-4ddf-8559-5e71028d4fd0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76db965947-cmgcq_calico-system(d70fefd9-029e-4ddf-8559-5e71028d4fd0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04347548a278d2f51abf474dc6737898a52c4f57ce831374340b2c4e4135aea6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76db965947-cmgcq" podUID="d70fefd9-029e-4ddf-8559-5e71028d4fd0" Jan 16 18:05:22.952701 containerd[1954]: time="2026-01-16T18:05:22.952516310Z" level=error msg="Failed to destroy network for sandbox \"922729f6fd1db1d56e3846009db869b38afebc5d3e31b7e02cb1ebd0f59f3bac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:05:22.957444 containerd[1954]: time="2026-01-16T18:05:22.957371822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d554d9f85-j7g4v,Uid:0aceb407-9a32-413e-8736-06bca676c39e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"922729f6fd1db1d56e3846009db869b38afebc5d3e31b7e02cb1ebd0f59f3bac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:05:22.958066 kubelet[3428]: E0116 18:05:22.958003 3428 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"922729f6fd1db1d56e3846009db869b38afebc5d3e31b7e02cb1ebd0f59f3bac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:05:22.958242 kubelet[3428]: E0116 18:05:22.958095 3428 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"922729f6fd1db1d56e3846009db869b38afebc5d3e31b7e02cb1ebd0f59f3bac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d554d9f85-j7g4v" Jan 16 18:05:22.958242 kubelet[3428]: E0116 18:05:22.958174 3428 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"922729f6fd1db1d56e3846009db869b38afebc5d3e31b7e02cb1ebd0f59f3bac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d554d9f85-j7g4v" Jan 16 18:05:22.959103 kubelet[3428]: E0116 18:05:22.958261 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7d554d9f85-j7g4v_calico-system(0aceb407-9a32-413e-8736-06bca676c39e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7d554d9f85-j7g4v_calico-system(0aceb407-9a32-413e-8736-06bca676c39e)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"922729f6fd1db1d56e3846009db869b38afebc5d3e31b7e02cb1ebd0f59f3bac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d554d9f85-j7g4v" podUID="0aceb407-9a32-413e-8736-06bca676c39e" Jan 16 18:05:22.963638 containerd[1954]: time="2026-01-16T18:05:22.963497210Z" level=error msg="Failed to destroy network for sandbox \"11135f28aab2d67a0684854cd26b3a7341679dec9fe044773c630dbfcaef469e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:05:22.969064 containerd[1954]: time="2026-01-16T18:05:22.968933138Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88slq,Uid:74813863-8ca6-40d9-bd92-5b37511fc2e0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"11135f28aab2d67a0684854cd26b3a7341679dec9fe044773c630dbfcaef469e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:05:22.969640 kubelet[3428]: E0116 18:05:22.969550 3428 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11135f28aab2d67a0684854cd26b3a7341679dec9fe044773c630dbfcaef469e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:05:22.969803 kubelet[3428]: E0116 18:05:22.969634 3428 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11135f28aab2d67a0684854cd26b3a7341679dec9fe044773c630dbfcaef469e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88slq" Jan 16 18:05:22.969803 kubelet[3428]: E0116 18:05:22.969670 3428 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11135f28aab2d67a0684854cd26b3a7341679dec9fe044773c630dbfcaef469e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88slq" Jan 16 18:05:22.969803 kubelet[3428]: E0116 18:05:22.969734 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-88slq_calico-system(74813863-8ca6-40d9-bd92-5b37511fc2e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-88slq_calico-system(74813863-8ca6-40d9-bd92-5b37511fc2e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11135f28aab2d67a0684854cd26b3a7341679dec9fe044773c630dbfcaef469e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-88slq" 
podUID="74813863-8ca6-40d9-bd92-5b37511fc2e0" Jan 16 18:05:23.009437 containerd[1954]: time="2026-01-16T18:05:23.008932258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 16 18:05:23.364207 kubelet[3428]: E0116 18:05:23.364099 3428 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 16 18:05:23.364955 kubelet[3428]: E0116 18:05:23.364276 3428 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d57dc7e-5c0c-47c3-a61e-c7073133e9a5-config-volume podName:1d57dc7e-5c0c-47c3-a61e-c7073133e9a5 nodeName:}" failed. No retries permitted until 2026-01-16 18:05:23.864243368 +0000 UTC m=+44.421942252 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1d57dc7e-5c0c-47c3-a61e-c7073133e9a5-config-volume") pod "coredns-668d6bf9bc-cbrj8" (UID: "1d57dc7e-5c0c-47c3-a61e-c7073133e9a5") : failed to sync configmap cache: timed out waiting for the condition Jan 16 18:05:23.370350 kubelet[3428]: E0116 18:05:23.369294 3428 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 16 18:05:23.370350 kubelet[3428]: E0116 18:05:23.369385 3428 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/73afd704-2c12-49ff-a165-c96fb49c636e-config-volume podName:73afd704-2c12-49ff-a165-c96fb49c636e nodeName:}" failed. No retries permitted until 2026-01-16 18:05:23.869359964 +0000 UTC m=+44.427058848 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/73afd704-2c12-49ff-a165-c96fb49c636e-config-volume") pod "coredns-668d6bf9bc-z6xnx" (UID: "73afd704-2c12-49ff-a165-c96fb49c636e") : failed to sync configmap cache: timed out waiting for the condition Jan 16 18:05:23.490029 kubelet[3428]: E0116 18:05:23.489952 3428 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 16 18:05:23.490029 kubelet[3428]: E0116 18:05:23.490010 3428 projected.go:194] Error preparing data for projected volume kube-api-access-xkph5 for pod calico-apiserver/calico-apiserver-54df9c5477-js6cv: failed to sync configmap cache: timed out waiting for the condition Jan 16 18:05:23.491192 kubelet[3428]: E0116 18:05:23.490104 3428 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/20a749f5-b28f-4523-9135-e8877a359519-kube-api-access-xkph5 podName:20a749f5-b28f-4523-9135-e8877a359519 nodeName:}" failed. No retries permitted until 2026-01-16 18:05:23.990077168 +0000 UTC m=+44.547776052 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xkph5" (UniqueName: "kubernetes.io/projected/20a749f5-b28f-4523-9135-e8877a359519-kube-api-access-xkph5") pod "calico-apiserver-54df9c5477-js6cv" (UID: "20a749f5-b28f-4523-9135-e8877a359519") : failed to sync configmap cache: timed out waiting for the condition Jan 16 18:05:23.494324 kubelet[3428]: E0116 18:05:23.494154 3428 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 16 18:05:23.494324 kubelet[3428]: E0116 18:05:23.494203 3428 projected.go:194] Error preparing data for projected volume kube-api-access-rbr88 for pod calico-apiserver/calico-apiserver-54df9c5477-tm29z: failed to sync configmap cache: timed out waiting for the condition Jan 16 18:05:23.494324 kubelet[3428]: E0116 18:05:23.494281 3428 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8adc081a-39d6-4153-ae00-f3df7e2ba175-kube-api-access-rbr88 podName:8adc081a-39d6-4153-ae00-f3df7e2ba175 nodeName:}" failed. No retries permitted until 2026-01-16 18:05:23.994255004 +0000 UTC m=+44.551953888 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rbr88" (UniqueName: "kubernetes.io/projected/8adc081a-39d6-4153-ae00-f3df7e2ba175-kube-api-access-rbr88") pod "calico-apiserver-54df9c5477-tm29z" (UID: "8adc081a-39d6-4153-ae00-f3df7e2ba175") : failed to sync configmap cache: timed out waiting for the condition Jan 16 18:05:23.560778 containerd[1954]: time="2026-01-16T18:05:23.560507989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-t9x2h,Uid:3932bc10-72fd-4993-add3-bcb26a36ba2d,Namespace:calico-system,Attempt:0,}" Jan 16 18:05:23.657473 containerd[1954]: time="2026-01-16T18:05:23.657200329Z" level=error msg="Failed to destroy network for sandbox \"ca4069293c37c456fa968a9cc983a5bd753e357a4378d2bb0ff7396f025d1313\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:05:23.664425 containerd[1954]: time="2026-01-16T18:05:23.664361617Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-t9x2h,Uid:3932bc10-72fd-4993-add3-bcb26a36ba2d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca4069293c37c456fa968a9cc983a5bd753e357a4378d2bb0ff7396f025d1313\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:05:23.665160 kubelet[3428]: E0116 18:05:23.665052 3428 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca4069293c37c456fa968a9cc983a5bd753e357a4378d2bb0ff7396f025d1313\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:05:23.665283 kubelet[3428]: E0116 18:05:23.665167 3428 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca4069293c37c456fa968a9cc983a5bd753e357a4378d2bb0ff7396f025d1313\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-t9x2h" Jan 16 18:05:23.665283 kubelet[3428]: E0116 18:05:23.665209 3428 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca4069293c37c456fa968a9cc983a5bd753e357a4378d2bb0ff7396f025d1313\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-t9x2h" Jan 16 18:05:23.667159 kubelet[3428]: E0116 18:05:23.665269 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-t9x2h_calico-system(3932bc10-72fd-4993-add3-bcb26a36ba2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-t9x2h_calico-system(3932bc10-72fd-4993-add3-bcb26a36ba2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca4069293c37c456fa968a9cc983a5bd753e357a4378d2bb0ff7396f025d1313\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-t9x2h" podUID="3932bc10-72fd-4993-add3-bcb26a36ba2d" Jan 16 18:05:24.034728 containerd[1954]: time="2026-01-16T18:05:24.034577435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z6xnx,Uid:73afd704-2c12-49ff-a165-c96fb49c636e,Namespace:kube-system,Attempt:0,}" Jan 16 18:05:24.102592 systemd[1]: run-netns-cni\x2d6e0b6955\x2d9039\x2d25d0\x2d52ba\x2d81e0cb636fb1.mount: Deactivated successfully. 
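Every RunPodSandbox failure above reduces to the same root cause quoted in the error text: the Calico CNI plugin cannot stat /var/lib/calico/nodename, which calico-node writes once it is up. A minimal sketch of exactly that check, assuming it runs on the affected node:

    # Sketch of the check suggested by the error text above: the Calico CNI
    # plugin expects /var/lib/calico/nodename, written by the calico-node
    # container after it starts and mounts /var/lib/calico/.
    from pathlib import Path

    nodename_file = Path("/var/lib/calico/nodename")
    if nodename_file.is_file():
        print("calico nodename:", nodename_file.read_text().strip())
    else:
        print("missing", nodename_file,
              "-- calico-node is not running yet or has not mounted /var/lib/calico/")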
Jan 16 18:05:24.123681 containerd[1954]: time="2026-01-16T18:05:24.120699491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cbrj8,Uid:1d57dc7e-5c0c-47c3-a61e-c7073133e9a5,Namespace:kube-system,Attempt:0,}" Jan 16 18:05:24.143829 containerd[1954]: time="2026-01-16T18:05:24.143761080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54df9c5477-js6cv,Uid:20a749f5-b28f-4523-9135-e8877a359519,Namespace:calico-apiserver,Attempt:0,}" Jan 16 18:05:24.188929 containerd[1954]: time="2026-01-16T18:05:24.188856204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54df9c5477-tm29z,Uid:8adc081a-39d6-4153-ae00-f3df7e2ba175,Namespace:calico-apiserver,Attempt:0,}" Jan 16 18:05:24.294049 containerd[1954]: time="2026-01-16T18:05:24.290831148Z" level=error msg="Failed to destroy network for sandbox \"b8bf603aff70a9f40353cc36ae0b71f743618f54f78307aa075f201fa62aa776\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:05:24.305520 containerd[1954]: time="2026-01-16T18:05:24.305439324Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z6xnx,Uid:73afd704-2c12-49ff-a165-c96fb49c636e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8bf603aff70a9f40353cc36ae0b71f743618f54f78307aa075f201fa62aa776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:05:24.305988 kubelet[3428]: E0116 18:05:24.305750 3428 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8bf603aff70a9f40353cc36ae0b71f743618f54f78307aa075f201fa62aa776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:05:24.305988 kubelet[3428]: E0116 18:05:24.305827 3428 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8bf603aff70a9f40353cc36ae0b71f743618f54f78307aa075f201fa62aa776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z6xnx" Jan 16 18:05:24.305988 kubelet[3428]: E0116 18:05:24.305863 3428 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8bf603aff70a9f40353cc36ae0b71f743618f54f78307aa075f201fa62aa776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z6xnx" Jan 16 18:05:24.306605 kubelet[3428]: E0116 18:05:24.305923 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-z6xnx_kube-system(73afd704-2c12-49ff-a165-c96fb49c636e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-z6xnx_kube-system(73afd704-2c12-49ff-a165-c96fb49c636e)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"b8bf603aff70a9f40353cc36ae0b71f743618f54f78307aa075f201fa62aa776\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-z6xnx" podUID="73afd704-2c12-49ff-a165-c96fb49c636e" Jan 16 18:05:24.424726 containerd[1954]: time="2026-01-16T18:05:24.424652173Z" level=error msg="Failed to destroy network for sandbox \"e5f6983f6e23f2b5c140ac607b3c2f0aca40169d5a350b4206f29b0b89aedfce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:05:24.429937 containerd[1954]: time="2026-01-16T18:05:24.429841525Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54df9c5477-tm29z,Uid:8adc081a-39d6-4153-ae00-f3df7e2ba175,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5f6983f6e23f2b5c140ac607b3c2f0aca40169d5a350b4206f29b0b89aedfce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:05:24.430328 kubelet[3428]: E0116 18:05:24.430267 3428 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5f6983f6e23f2b5c140ac607b3c2f0aca40169d5a350b4206f29b0b89aedfce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:05:24.433868 kubelet[3428]: E0116 18:05:24.430376 3428 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5f6983f6e23f2b5c140ac607b3c2f0aca40169d5a350b4206f29b0b89aedfce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54df9c5477-tm29z" Jan 16 18:05:24.433868 kubelet[3428]: E0116 18:05:24.430435 3428 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5f6983f6e23f2b5c140ac607b3c2f0aca40169d5a350b4206f29b0b89aedfce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54df9c5477-tm29z" Jan 16 18:05:24.433868 kubelet[3428]: E0116 18:05:24.430528 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54df9c5477-tm29z_calico-apiserver(8adc081a-39d6-4153-ae00-f3df7e2ba175)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54df9c5477-tm29z_calico-apiserver(8adc081a-39d6-4153-ae00-f3df7e2ba175)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5f6983f6e23f2b5c140ac607b3c2f0aca40169d5a350b4206f29b0b89aedfce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-54df9c5477-tm29z" podUID="8adc081a-39d6-4153-ae00-f3df7e2ba175" Jan 16 18:05:24.438877 containerd[1954]: time="2026-01-16T18:05:24.438803041Z" level=error msg="Failed to destroy network for sandbox \"a92101d88abd19c6b4dea7dd69af4f16ca37508840c27fd21a570a1d5607b8bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:05:24.443695 containerd[1954]: time="2026-01-16T18:05:24.443409541Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54df9c5477-js6cv,Uid:20a749f5-b28f-4523-9135-e8877a359519,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a92101d88abd19c6b4dea7dd69af4f16ca37508840c27fd21a570a1d5607b8bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:05:24.443988 kubelet[3428]: E0116 18:05:24.443821 3428 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a92101d88abd19c6b4dea7dd69af4f16ca37508840c27fd21a570a1d5607b8bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:05:24.443988 kubelet[3428]: E0116 18:05:24.443924 3428 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a92101d88abd19c6b4dea7dd69af4f16ca37508840c27fd21a570a1d5607b8bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54df9c5477-js6cv" Jan 16 18:05:24.444951 kubelet[3428]: E0116 18:05:24.443989 3428 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a92101d88abd19c6b4dea7dd69af4f16ca37508840c27fd21a570a1d5607b8bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54df9c5477-js6cv" Jan 16 18:05:24.444951 kubelet[3428]: E0116 18:05:24.444092 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54df9c5477-js6cv_calico-apiserver(20a749f5-b28f-4523-9135-e8877a359519)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54df9c5477-js6cv_calico-apiserver(20a749f5-b28f-4523-9135-e8877a359519)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a92101d88abd19c6b4dea7dd69af4f16ca37508840c27fd21a570a1d5607b8bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54df9c5477-js6cv" podUID="20a749f5-b28f-4523-9135-e8877a359519" Jan 16 18:05:24.458896 containerd[1954]: time="2026-01-16T18:05:24.458631457Z" level=error msg="Failed to destroy network for sandbox 
\"e2ffdf5df298a800e4ab666e913b2db1bfa4d1693d6581f623a7ca5e19c8c120\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:05:24.465598 containerd[1954]: time="2026-01-16T18:05:24.465370897Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cbrj8,Uid:1d57dc7e-5c0c-47c3-a61e-c7073133e9a5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2ffdf5df298a800e4ab666e913b2db1bfa4d1693d6581f623a7ca5e19c8c120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:05:24.467383 kubelet[3428]: E0116 18:05:24.467294 3428 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2ffdf5df298a800e4ab666e913b2db1bfa4d1693d6581f623a7ca5e19c8c120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:05:24.467944 kubelet[3428]: E0116 18:05:24.467378 3428 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2ffdf5df298a800e4ab666e913b2db1bfa4d1693d6581f623a7ca5e19c8c120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cbrj8" Jan 16 18:05:24.467944 kubelet[3428]: E0116 18:05:24.467422 3428 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2ffdf5df298a800e4ab666e913b2db1bfa4d1693d6581f623a7ca5e19c8c120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cbrj8" Jan 16 18:05:24.467944 kubelet[3428]: E0116 18:05:24.467490 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-cbrj8_kube-system(1d57dc7e-5c0c-47c3-a61e-c7073133e9a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-cbrj8_kube-system(1d57dc7e-5c0c-47c3-a61e-c7073133e9a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2ffdf5df298a800e4ab666e913b2db1bfa4d1693d6581f623a7ca5e19c8c120\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cbrj8" podUID="1d57dc7e-5c0c-47c3-a61e-c7073133e9a5" Jan 16 18:05:25.095617 systemd[1]: run-netns-cni\x2d41b16254\x2ded10\x2d1be9\x2d6dd4\x2dad5d5e392819.mount: Deactivated successfully. Jan 16 18:05:25.095828 systemd[1]: run-netns-cni\x2d93b6f826\x2d6afc\x2d8e12\x2deb0d\x2dbea0fca57255.mount: Deactivated successfully. Jan 16 18:05:25.095951 systemd[1]: run-netns-cni\x2ddc46749a\x2d1a3e\x2d5cfe\x2d7997\x2dddf7f2c99bff.mount: Deactivated successfully. 
Jan 16 18:05:25.096071 systemd[1]: run-netns-cni\x2d29b72f97\x2d0055\x2d755e\x2de315\x2d9750d17dae47.mount: Deactivated successfully. Jan 16 18:05:29.485516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1340734831.mount: Deactivated successfully. Jan 16 18:05:29.542667 containerd[1954]: time="2026-01-16T18:05:29.542582550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:05:29.545174 containerd[1954]: time="2026-01-16T18:05:29.544903038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 16 18:05:29.547884 containerd[1954]: time="2026-01-16T18:05:29.547814754Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:05:29.555176 containerd[1954]: time="2026-01-16T18:05:29.554004234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:05:29.555578 containerd[1954]: time="2026-01-16T18:05:29.555391770Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.5463914s" Jan 16 18:05:29.555578 containerd[1954]: time="2026-01-16T18:05:29.555443922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 16 18:05:29.587407 containerd[1954]: time="2026-01-16T18:05:29.587202787Z" level=info msg="CreateContainer within sandbox \"27e8dac55229e09e774e303070c9304127d14a5285d64087a9f514e361bd59d7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 16 18:05:29.616168 containerd[1954]: time="2026-01-16T18:05:29.615314371Z" level=info msg="Container b1b1e514ad9918e5dcdd4017885f414758a91e17a0041b1e487900dd74bbf38b: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:05:29.643813 containerd[1954]: time="2026-01-16T18:05:29.643643347Z" level=info msg="CreateContainer within sandbox \"27e8dac55229e09e774e303070c9304127d14a5285d64087a9f514e361bd59d7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b1b1e514ad9918e5dcdd4017885f414758a91e17a0041b1e487900dd74bbf38b\"" Jan 16 18:05:29.645680 containerd[1954]: time="2026-01-16T18:05:29.645610567Z" level=info msg="StartContainer for \"b1b1e514ad9918e5dcdd4017885f414758a91e17a0041b1e487900dd74bbf38b\"" Jan 16 18:05:29.650021 containerd[1954]: time="2026-01-16T18:05:29.649962175Z" level=info msg="connecting to shim b1b1e514ad9918e5dcdd4017885f414758a91e17a0041b1e487900dd74bbf38b" address="unix:///run/containerd/s/70494f1765970de2e9965b3efe6d8c870947b9ecfe91c1d62c7153ce1f247f09" protocol=ttrpc version=3 Jan 16 18:05:29.693468 systemd[1]: Started cri-containerd-b1b1e514ad9918e5dcdd4017885f414758a91e17a0041b1e487900dd74bbf38b.scope - libcontainer container b1b1e514ad9918e5dcdd4017885f414758a91e17a0041b1e487900dd74bbf38b. 
Jan 16 18:05:29.793449 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 16 18:05:29.793784 kernel: audit: type=1334 audit(1768586729.789:580): prog-id=175 op=LOAD Jan 16 18:05:29.793935 kernel: audit: type=1300 audit(1768586729.789:580): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3934 pid=4456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:29.789000 audit: BPF prog-id=175 op=LOAD Jan 16 18:05:29.800620 kernel: audit: type=1327 audit(1768586729.789:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231623165353134616439393138653564636464343031373838356634 Jan 16 18:05:29.789000 audit[4456]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3934 pid=4456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:29.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231623165353134616439393138653564636464343031373838356634 Jan 16 18:05:29.793000 audit: BPF prog-id=176 op=LOAD Jan 16 18:05:29.793000 audit[4456]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3934 pid=4456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:29.815769 kernel: audit: type=1334 audit(1768586729.793:581): prog-id=176 op=LOAD Jan 16 18:05:29.815881 kernel: audit: type=1300 audit(1768586729.793:581): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3934 pid=4456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:29.817897 kernel: audit: type=1327 audit(1768586729.793:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231623165353134616439393138653564636464343031373838356634 Jan 16 18:05:29.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231623165353134616439393138653564636464343031373838356634 Jan 16 18:05:29.804000 audit: BPF prog-id=176 op=UNLOAD Jan 16 18:05:29.824647 kernel: audit: type=1334 audit(1768586729.804:582): prog-id=176 op=UNLOAD Jan 16 18:05:29.824784 kernel: audit: type=1300 audit(1768586729.804:582): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3934 pid=4456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:29.804000 
audit[4456]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3934 pid=4456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:29.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231623165353134616439393138653564636464343031373838356634 Jan 16 18:05:29.835899 kernel: audit: type=1327 audit(1768586729.804:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231623165353134616439393138653564636464343031373838356634 Jan 16 18:05:29.805000 audit: BPF prog-id=175 op=UNLOAD Jan 16 18:05:29.837643 kernel: audit: type=1334 audit(1768586729.805:583): prog-id=175 op=UNLOAD Jan 16 18:05:29.805000 audit[4456]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3934 pid=4456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:29.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231623165353134616439393138653564636464343031373838356634 Jan 16 18:05:29.805000 audit: BPF prog-id=177 op=LOAD Jan 16 18:05:29.805000 audit[4456]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=3934 pid=4456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:29.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231623165353134616439393138653564636464343031373838356634 Jan 16 18:05:29.875386 containerd[1954]: time="2026-01-16T18:05:29.875323712Z" level=info msg="StartContainer for \"b1b1e514ad9918e5dcdd4017885f414758a91e17a0041b1e487900dd74bbf38b\" returns successfully" Jan 16 18:05:30.070817 kubelet[3428]: I0116 18:05:30.070664 3428 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8g8zx" podStartSLOduration=1.5205187470000001 podStartE2EDuration="18.070631153s" podCreationTimestamp="2026-01-16 18:05:12 +0000 UTC" firstStartedPulling="2026-01-16 18:05:13.008486436 +0000 UTC m=+33.566185320" lastFinishedPulling="2026-01-16 18:05:29.558598842 +0000 UTC m=+50.116297726" observedRunningTime="2026-01-16 18:05:30.067889369 +0000 UTC m=+50.625588277" watchObservedRunningTime="2026-01-16 18:05:30.070631153 +0000 UTC m=+50.628330037" Jan 16 18:05:30.226150 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 16 18:05:30.226262 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 16 18:05:30.629602 kubelet[3428]: I0116 18:05:30.629542 3428 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0aceb407-9a32-413e-8736-06bca676c39e-whisker-ca-bundle\") pod \"0aceb407-9a32-413e-8736-06bca676c39e\" (UID: \"0aceb407-9a32-413e-8736-06bca676c39e\") " Jan 16 18:05:30.629780 kubelet[3428]: I0116 18:05:30.629676 3428 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0aceb407-9a32-413e-8736-06bca676c39e-whisker-backend-key-pair\") pod \"0aceb407-9a32-413e-8736-06bca676c39e\" (UID: \"0aceb407-9a32-413e-8736-06bca676c39e\") " Jan 16 18:05:30.629780 kubelet[3428]: I0116 18:05:30.629743 3428 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58656\" (UniqueName: \"kubernetes.io/projected/0aceb407-9a32-413e-8736-06bca676c39e-kube-api-access-58656\") pod \"0aceb407-9a32-413e-8736-06bca676c39e\" (UID: \"0aceb407-9a32-413e-8736-06bca676c39e\") " Jan 16 18:05:30.631235 kubelet[3428]: I0116 18:05:30.630858 3428 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aceb407-9a32-413e-8736-06bca676c39e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0aceb407-9a32-413e-8736-06bca676c39e" (UID: "0aceb407-9a32-413e-8736-06bca676c39e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 16 18:05:30.644773 systemd[1]: var-lib-kubelet-pods-0aceb407\x2d9a32\x2d413e\x2d8736\x2d06bca676c39e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d58656.mount: Deactivated successfully. Jan 16 18:05:30.646658 kubelet[3428]: I0116 18:05:30.646396 3428 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aceb407-9a32-413e-8736-06bca676c39e-kube-api-access-58656" (OuterVolumeSpecName: "kube-api-access-58656") pod "0aceb407-9a32-413e-8736-06bca676c39e" (UID: "0aceb407-9a32-413e-8736-06bca676c39e"). InnerVolumeSpecName "kube-api-access-58656". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 16 18:05:30.652819 kubelet[3428]: I0116 18:05:30.652661 3428 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aceb407-9a32-413e-8736-06bca676c39e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0aceb407-9a32-413e-8736-06bca676c39e" (UID: "0aceb407-9a32-413e-8736-06bca676c39e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 16 18:05:30.655616 systemd[1]: var-lib-kubelet-pods-0aceb407\x2d9a32\x2d413e\x2d8736\x2d06bca676c39e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 16 18:05:30.731011 kubelet[3428]: I0116 18:05:30.730942 3428 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-58656\" (UniqueName: \"kubernetes.io/projected/0aceb407-9a32-413e-8736-06bca676c39e-kube-api-access-58656\") on node \"ip-172-31-22-249\" DevicePath \"\"" Jan 16 18:05:30.731297 kubelet[3428]: I0116 18:05:30.731001 3428 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0aceb407-9a32-413e-8736-06bca676c39e-whisker-ca-bundle\") on node \"ip-172-31-22-249\" DevicePath \"\"" Jan 16 18:05:30.731398 kubelet[3428]: I0116 18:05:30.731306 3428 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0aceb407-9a32-413e-8736-06bca676c39e-whisker-backend-key-pair\") on node \"ip-172-31-22-249\" DevicePath \"\"" Jan 16 18:05:31.052821 systemd[1]: Removed slice kubepods-besteffort-pod0aceb407_9a32_413e_8736_06bca676c39e.slice - libcontainer container kubepods-besteffort-pod0aceb407_9a32_413e_8736_06bca676c39e.slice. Jan 16 18:05:31.168887 systemd[1]: Created slice kubepods-besteffort-pod170a3e0a_b54a_4909_a38e_9cdfe9da4171.slice - libcontainer container kubepods-besteffort-pod170a3e0a_b54a_4909_a38e_9cdfe9da4171.slice. Jan 16 18:05:31.236804 kubelet[3428]: I0116 18:05:31.236715 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/170a3e0a-b54a-4909-a38e-9cdfe9da4171-whisker-backend-key-pair\") pod \"whisker-5c9648cbf-qbkfw\" (UID: \"170a3e0a-b54a-4909-a38e-9cdfe9da4171\") " pod="calico-system/whisker-5c9648cbf-qbkfw" Jan 16 18:05:31.238080 kubelet[3428]: I0116 18:05:31.237460 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/170a3e0a-b54a-4909-a38e-9cdfe9da4171-whisker-ca-bundle\") pod \"whisker-5c9648cbf-qbkfw\" (UID: \"170a3e0a-b54a-4909-a38e-9cdfe9da4171\") " pod="calico-system/whisker-5c9648cbf-qbkfw" Jan 16 18:05:31.238080 kubelet[3428]: I0116 18:05:31.237556 3428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g74js\" (UniqueName: \"kubernetes.io/projected/170a3e0a-b54a-4909-a38e-9cdfe9da4171-kube-api-access-g74js\") pod \"whisker-5c9648cbf-qbkfw\" (UID: \"170a3e0a-b54a-4909-a38e-9cdfe9da4171\") " pod="calico-system/whisker-5c9648cbf-qbkfw" Jan 16 18:05:31.479623 containerd[1954]: time="2026-01-16T18:05:31.479528096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c9648cbf-qbkfw,Uid:170a3e0a-b54a-4909-a38e-9cdfe9da4171,Namespace:calico-system,Attempt:0,}" Jan 16 18:05:31.743442 kubelet[3428]: I0116 18:05:31.743079 3428 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aceb407-9a32-413e-8736-06bca676c39e" path="/var/lib/kubelet/pods/0aceb407-9a32-413e-8736-06bca676c39e/volumes" Jan 16 18:05:32.398320 (udev-worker)[4517]: Network interface NamePolicy= disabled on kernel command line. 
Jan 16 18:05:32.403165 systemd-networkd[1870]: calibdc70b36c92: Link UP Jan 16 18:05:32.407225 systemd-networkd[1870]: calibdc70b36c92: Gained carrier Jan 16 18:05:32.486671 containerd[1954]: 2026-01-16 18:05:31.626 [INFO][4572] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 18:05:32.486671 containerd[1954]: 2026-01-16 18:05:32.059 [INFO][4572] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--249-k8s-whisker--5c9648cbf--qbkfw-eth0 whisker-5c9648cbf- calico-system 170a3e0a-b54a-4909-a38e-9cdfe9da4171 900 0 2026-01-16 18:05:31 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5c9648cbf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-22-249 whisker-5c9648cbf-qbkfw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibdc70b36c92 [] [] }} ContainerID="54ff6fca10b9ca9f037c3f88365a5bec21142f4c53bfd2f19c344d01be2e8b13" Namespace="calico-system" Pod="whisker-5c9648cbf-qbkfw" WorkloadEndpoint="ip--172--31--22--249-k8s-whisker--5c9648cbf--qbkfw-" Jan 16 18:05:32.486671 containerd[1954]: 2026-01-16 18:05:32.059 [INFO][4572] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="54ff6fca10b9ca9f037c3f88365a5bec21142f4c53bfd2f19c344d01be2e8b13" Namespace="calico-system" Pod="whisker-5c9648cbf-qbkfw" WorkloadEndpoint="ip--172--31--22--249-k8s-whisker--5c9648cbf--qbkfw-eth0" Jan 16 18:05:32.486671 containerd[1954]: 2026-01-16 18:05:32.237 [INFO][4646] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54ff6fca10b9ca9f037c3f88365a5bec21142f4c53bfd2f19c344d01be2e8b13" HandleID="k8s-pod-network.54ff6fca10b9ca9f037c3f88365a5bec21142f4c53bfd2f19c344d01be2e8b13" Workload="ip--172--31--22--249-k8s-whisker--5c9648cbf--qbkfw-eth0" Jan 16 18:05:32.488151 containerd[1954]: 2026-01-16 18:05:32.239 [INFO][4646] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="54ff6fca10b9ca9f037c3f88365a5bec21142f4c53bfd2f19c344d01be2e8b13" HandleID="k8s-pod-network.54ff6fca10b9ca9f037c3f88365a5bec21142f4c53bfd2f19c344d01be2e8b13" Workload="ip--172--31--22--249-k8s-whisker--5c9648cbf--qbkfw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb5d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-249", "pod":"whisker-5c9648cbf-qbkfw", "timestamp":"2026-01-16 18:05:32.237545816 +0000 UTC"}, Hostname:"ip-172-31-22-249", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 18:05:32.488151 containerd[1954]: 2026-01-16 18:05:32.239 [INFO][4646] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 18:05:32.488151 containerd[1954]: 2026-01-16 18:05:32.239 [INFO][4646] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 18:05:32.488151 containerd[1954]: 2026-01-16 18:05:32.239 [INFO][4646] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-249' Jan 16 18:05:32.488151 containerd[1954]: 2026-01-16 18:05:32.274 [INFO][4646] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.54ff6fca10b9ca9f037c3f88365a5bec21142f4c53bfd2f19c344d01be2e8b13" host="ip-172-31-22-249" Jan 16 18:05:32.488151 containerd[1954]: 2026-01-16 18:05:32.308 [INFO][4646] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-249" Jan 16 18:05:32.488151 containerd[1954]: 2026-01-16 18:05:32.320 [INFO][4646] ipam/ipam.go 511: Trying affinity for 192.168.54.0/26 host="ip-172-31-22-249" Jan 16 18:05:32.488151 containerd[1954]: 2026-01-16 18:05:32.326 [INFO][4646] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.0/26 host="ip-172-31-22-249" Jan 16 18:05:32.488151 containerd[1954]: 2026-01-16 18:05:32.330 [INFO][4646] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="ip-172-31-22-249" Jan 16 18:05:32.488662 containerd[1954]: 2026-01-16 18:05:32.331 [INFO][4646] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.54ff6fca10b9ca9f037c3f88365a5bec21142f4c53bfd2f19c344d01be2e8b13" host="ip-172-31-22-249" Jan 16 18:05:32.488662 containerd[1954]: 2026-01-16 18:05:32.334 [INFO][4646] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.54ff6fca10b9ca9f037c3f88365a5bec21142f4c53bfd2f19c344d01be2e8b13 Jan 16 18:05:32.488662 containerd[1954]: 2026-01-16 18:05:32.342 [INFO][4646] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.54ff6fca10b9ca9f037c3f88365a5bec21142f4c53bfd2f19c344d01be2e8b13" host="ip-172-31-22-249" Jan 16 18:05:32.488662 containerd[1954]: 2026-01-16 18:05:32.364 [INFO][4646] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.1/26] block=192.168.54.0/26 handle="k8s-pod-network.54ff6fca10b9ca9f037c3f88365a5bec21142f4c53bfd2f19c344d01be2e8b13" host="ip-172-31-22-249" Jan 16 18:05:32.488662 containerd[1954]: 2026-01-16 18:05:32.365 [INFO][4646] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.1/26] handle="k8s-pod-network.54ff6fca10b9ca9f037c3f88365a5bec21142f4c53bfd2f19c344d01be2e8b13" host="ip-172-31-22-249" Jan 16 18:05:32.488662 containerd[1954]: 2026-01-16 18:05:32.365 [INFO][4646] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 16 18:05:32.488662 containerd[1954]: 2026-01-16 18:05:32.365 [INFO][4646] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.1/26] IPv6=[] ContainerID="54ff6fca10b9ca9f037c3f88365a5bec21142f4c53bfd2f19c344d01be2e8b13" HandleID="k8s-pod-network.54ff6fca10b9ca9f037c3f88365a5bec21142f4c53bfd2f19c344d01be2e8b13" Workload="ip--172--31--22--249-k8s-whisker--5c9648cbf--qbkfw-eth0" Jan 16 18:05:32.489002 containerd[1954]: 2026-01-16 18:05:32.376 [INFO][4572] cni-plugin/k8s.go 418: Populated endpoint ContainerID="54ff6fca10b9ca9f037c3f88365a5bec21142f4c53bfd2f19c344d01be2e8b13" Namespace="calico-system" Pod="whisker-5c9648cbf-qbkfw" WorkloadEndpoint="ip--172--31--22--249-k8s-whisker--5c9648cbf--qbkfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--249-k8s-whisker--5c9648cbf--qbkfw-eth0", GenerateName:"whisker-5c9648cbf-", Namespace:"calico-system", SelfLink:"", UID:"170a3e0a-b54a-4909-a38e-9cdfe9da4171", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 5, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c9648cbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-249", ContainerID:"", Pod:"whisker-5c9648cbf-qbkfw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.54.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibdc70b36c92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:05:32.489002 containerd[1954]: 2026-01-16 18:05:32.376 [INFO][4572] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.1/32] ContainerID="54ff6fca10b9ca9f037c3f88365a5bec21142f4c53bfd2f19c344d01be2e8b13" Namespace="calico-system" Pod="whisker-5c9648cbf-qbkfw" WorkloadEndpoint="ip--172--31--22--249-k8s-whisker--5c9648cbf--qbkfw-eth0" Jan 16 18:05:32.490984 containerd[1954]: 2026-01-16 18:05:32.376 [INFO][4572] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibdc70b36c92 ContainerID="54ff6fca10b9ca9f037c3f88365a5bec21142f4c53bfd2f19c344d01be2e8b13" Namespace="calico-system" Pod="whisker-5c9648cbf-qbkfw" WorkloadEndpoint="ip--172--31--22--249-k8s-whisker--5c9648cbf--qbkfw-eth0" Jan 16 18:05:32.490984 containerd[1954]: 2026-01-16 18:05:32.409 [INFO][4572] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54ff6fca10b9ca9f037c3f88365a5bec21142f4c53bfd2f19c344d01be2e8b13" Namespace="calico-system" Pod="whisker-5c9648cbf-qbkfw" WorkloadEndpoint="ip--172--31--22--249-k8s-whisker--5c9648cbf--qbkfw-eth0" Jan 16 18:05:32.491220 containerd[1954]: 2026-01-16 18:05:32.410 [INFO][4572] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="54ff6fca10b9ca9f037c3f88365a5bec21142f4c53bfd2f19c344d01be2e8b13" Namespace="calico-system" Pod="whisker-5c9648cbf-qbkfw" 
WorkloadEndpoint="ip--172--31--22--249-k8s-whisker--5c9648cbf--qbkfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--249-k8s-whisker--5c9648cbf--qbkfw-eth0", GenerateName:"whisker-5c9648cbf-", Namespace:"calico-system", SelfLink:"", UID:"170a3e0a-b54a-4909-a38e-9cdfe9da4171", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 5, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c9648cbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-249", ContainerID:"54ff6fca10b9ca9f037c3f88365a5bec21142f4c53bfd2f19c344d01be2e8b13", Pod:"whisker-5c9648cbf-qbkfw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.54.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibdc70b36c92", MAC:"3e:58:f0:74:1c:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:05:32.491460 containerd[1954]: 2026-01-16 18:05:32.478 [INFO][4572] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="54ff6fca10b9ca9f037c3f88365a5bec21142f4c53bfd2f19c344d01be2e8b13" Namespace="calico-system" Pod="whisker-5c9648cbf-qbkfw" WorkloadEndpoint="ip--172--31--22--249-k8s-whisker--5c9648cbf--qbkfw-eth0" Jan 16 18:05:32.580560 containerd[1954]: time="2026-01-16T18:05:32.580478001Z" level=info msg="connecting to shim 54ff6fca10b9ca9f037c3f88365a5bec21142f4c53bfd2f19c344d01be2e8b13" address="unix:///run/containerd/s/eee65a566addc1c8267b4c9f1c629bc0764f34e69e6cad4a6b43b09d7dd810f3" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:05:32.662930 systemd[1]: Started cri-containerd-54ff6fca10b9ca9f037c3f88365a5bec21142f4c53bfd2f19c344d01be2e8b13.scope - libcontainer container 54ff6fca10b9ca9f037c3f88365a5bec21142f4c53bfd2f19c344d01be2e8b13. 
Jan 16 18:05:32.791000 audit: BPF prog-id=178 op=LOAD Jan 16 18:05:32.792000 audit: BPF prog-id=179 op=LOAD Jan 16 18:05:32.792000 audit[4723]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4712 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:32.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534666636666361313062396361396630333763336638383336356135 Jan 16 18:05:32.793000 audit: BPF prog-id=179 op=UNLOAD Jan 16 18:05:32.793000 audit[4723]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4712 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:32.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534666636666361313062396361396630333763336638383336356135 Jan 16 18:05:32.795000 audit: BPF prog-id=180 op=LOAD Jan 16 18:05:32.795000 audit[4723]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4712 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:32.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534666636666361313062396361396630333763336638383336356135 Jan 16 18:05:32.795000 audit: BPF prog-id=181 op=LOAD Jan 16 18:05:32.795000 audit[4723]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4712 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:32.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534666636666361313062396361396630333763336638383336356135 Jan 16 18:05:32.796000 audit: BPF prog-id=181 op=UNLOAD Jan 16 18:05:32.796000 audit[4723]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4712 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:32.796000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534666636666361313062396361396630333763336638383336356135 Jan 16 18:05:32.796000 audit: BPF prog-id=180 op=UNLOAD Jan 16 18:05:32.796000 audit[4723]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4712 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:32.796000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534666636666361313062396361396630333763336638383336356135 Jan 16 18:05:32.797000 audit: BPF prog-id=182 op=LOAD Jan 16 18:05:32.797000 audit[4723]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4712 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:32.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534666636666361313062396361396630333763336638383336356135 Jan 16 18:05:32.891601 containerd[1954]: time="2026-01-16T18:05:32.891528191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c9648cbf-qbkfw,Uid:170a3e0a-b54a-4909-a38e-9cdfe9da4171,Namespace:calico-system,Attempt:0,} returns sandbox id \"54ff6fca10b9ca9f037c3f88365a5bec21142f4c53bfd2f19c344d01be2e8b13\"" Jan 16 18:05:32.901077 containerd[1954]: time="2026-01-16T18:05:32.900945107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 18:05:33.180084 containerd[1954]: time="2026-01-16T18:05:33.180016400Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:05:33.182847 containerd[1954]: time="2026-01-16T18:05:33.182726972Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 18:05:33.183244 containerd[1954]: time="2026-01-16T18:05:33.182764544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:33.183618 kubelet[3428]: E0116 18:05:33.183451 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:05:33.184712 kubelet[3428]: E0116 18:05:33.183797 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:05:33.194302 kubelet[3428]: E0116 18:05:33.194096 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:90e9697f810349ed85f10d175bcdf8e8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g74js,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c9648cbf-qbkfw_calico-system(170a3e0a-b54a-4909-a38e-9cdfe9da4171): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:33.198580 containerd[1954]: time="2026-01-16T18:05:33.198515949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 18:05:33.484000 audit: BPF prog-id=183 op=LOAD Jan 16 18:05:33.484000 audit[4770]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff62aa4d8 a2=98 a3=fffff62aa4c8 items=0 ppid=4594 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.484000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 18:05:33.485000 audit: BPF prog-id=183 op=UNLOAD Jan 16 18:05:33.485000 audit[4770]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff62aa4a8 a3=0 items=0 ppid=4594 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.485000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 18:05:33.486000 audit: BPF prog-id=184 op=LOAD Jan 16 18:05:33.486000 audit[4770]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff62aa388 a2=74 a3=95 items=0 ppid=4594 pid=4770 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.486000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 18:05:33.486000 audit: BPF prog-id=184 op=UNLOAD Jan 16 18:05:33.486000 audit[4770]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4594 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.486000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 18:05:33.488000 audit: BPF prog-id=185 op=LOAD Jan 16 18:05:33.488000 audit[4770]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff62aa3b8 a2=40 a3=fffff62aa3e8 items=0 ppid=4594 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.488000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 18:05:33.488000 audit: BPF prog-id=185 op=UNLOAD Jan 16 18:05:33.488000 audit[4770]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff62aa3e8 items=0 ppid=4594 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.488000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 18:05:33.496000 audit: BPF prog-id=186 op=LOAD Jan 16 18:05:33.496000 audit[4775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc776ff38 a2=98 a3=ffffc776ff28 items=0 ppid=4594 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.496000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:05:33.497000 audit: BPF prog-id=186 op=UNLOAD Jan 16 18:05:33.497000 audit[4775]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc776ff08 a3=0 items=0 ppid=4594 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.497000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:05:33.497000 audit: BPF prog-id=187 
op=LOAD Jan 16 18:05:33.497000 audit[4775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc776fbc8 a2=74 a3=95 items=0 ppid=4594 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.497000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:05:33.498000 audit: BPF prog-id=187 op=UNLOAD Jan 16 18:05:33.498000 audit[4775]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4594 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.498000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:05:33.498000 audit: BPF prog-id=188 op=LOAD Jan 16 18:05:33.498000 audit[4775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc776fc28 a2=94 a3=2 items=0 ppid=4594 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.498000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:05:33.500000 audit: BPF prog-id=188 op=UNLOAD Jan 16 18:05:33.500000 audit[4775]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4594 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.500000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:05:33.509080 containerd[1954]: time="2026-01-16T18:05:33.509004802Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:05:33.512458 containerd[1954]: time="2026-01-16T18:05:33.512365234Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 18:05:33.513766 containerd[1954]: time="2026-01-16T18:05:33.512504278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:33.513885 kubelet[3428]: E0116 18:05:33.513328 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:05:33.513885 kubelet[3428]: E0116 18:05:33.513401 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:05:33.514060 kubelet[3428]: E0116 18:05:33.513591 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g74js,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c9648cbf-qbkfw_calico-system(170a3e0a-b54a-4909-a38e-9cdfe9da4171): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:33.515070 kubelet[3428]: E0116 18:05:33.514992 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c9648cbf-qbkfw" podUID="170a3e0a-b54a-4909-a38e-9cdfe9da4171" Jan 16 18:05:33.705000 audit: BPF prog-id=189 op=LOAD Jan 16 18:05:33.705000 audit[4775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc776fbe8 a2=40 a3=ffffc776fc18 items=0 ppid=4594 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.705000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:05:33.705000 audit: BPF prog-id=189 op=UNLOAD Jan 16 18:05:33.705000 audit[4775]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 
a3=ffffc776fc18 items=0 ppid=4594 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.705000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:05:33.724000 audit: BPF prog-id=190 op=LOAD Jan 16 18:05:33.724000 audit[4775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc776fbf8 a2=94 a3=4 items=0 ppid=4594 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.724000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:05:33.724000 audit: BPF prog-id=190 op=UNLOAD Jan 16 18:05:33.724000 audit[4775]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4594 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.724000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:05:33.725000 audit: BPF prog-id=191 op=LOAD Jan 16 18:05:33.725000 audit[4775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc776fa38 a2=94 a3=5 items=0 ppid=4594 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.725000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:05:33.726000 audit: BPF prog-id=191 op=UNLOAD Jan 16 18:05:33.726000 audit[4775]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4594 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.726000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:05:33.726000 audit: BPF prog-id=192 op=LOAD Jan 16 18:05:33.726000 audit[4775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc776fc68 a2=94 a3=6 items=0 ppid=4594 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.726000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:05:33.726000 audit: BPF prog-id=192 op=UNLOAD Jan 16 18:05:33.726000 audit[4775]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4594 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.726000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:05:33.727000 audit: BPF prog-id=193 op=LOAD Jan 16 18:05:33.727000 audit[4775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc776f438 a2=94 a3=83 items=0 ppid=4594 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.727000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:05:33.728000 audit: BPF prog-id=194 op=LOAD Jan 16 18:05:33.728000 audit[4775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffc776f1f8 a2=94 a3=2 items=0 ppid=4594 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.728000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:05:33.728000 audit: BPF prog-id=194 op=UNLOAD Jan 16 18:05:33.728000 audit[4775]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4594 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.728000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:05:33.728000 audit: BPF prog-id=193 op=UNLOAD Jan 16 18:05:33.728000 audit[4775]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=281f9620 a3=281ecb00 items=0 ppid=4594 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.728000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:05:33.751000 audit: BPF prog-id=195 op=LOAD Jan 16 18:05:33.751000 audit[4794]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc96260b8 a2=98 a3=ffffc96260a8 items=0 ppid=4594 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.751000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 18:05:33.751000 audit: BPF prog-id=195 op=UNLOAD Jan 16 18:05:33.751000 audit[4794]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc9626088 a3=0 items=0 ppid=4594 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.751000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 18:05:33.751000 audit: BPF prog-id=196 op=LOAD Jan 16 18:05:33.751000 audit[4794]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc9625f68 a2=74 a3=95 items=0 ppid=4594 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.751000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 18:05:33.751000 audit: BPF prog-id=196 op=UNLOAD Jan 16 18:05:33.751000 audit[4794]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4594 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.751000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 18:05:33.751000 audit: BPF prog-id=197 op=LOAD Jan 16 18:05:33.751000 audit[4794]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc9625f98 a2=40 a3=ffffc9625fc8 items=0 ppid=4594 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.751000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 18:05:33.753000 audit: BPF prog-id=197 op=UNLOAD Jan 16 18:05:33.753000 audit[4794]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffc9625fc8 items=0 ppid=4594 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.753000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 18:05:33.878288 systemd-networkd[1870]: vxlan.calico: Link UP Jan 16 18:05:33.878311 systemd-networkd[1870]: vxlan.calico: Gained carrier Jan 16 18:05:33.918662 (udev-worker)[4520]: Network interface NamePolicy= disabled on kernel command line. 
Jan 16 18:05:33.918000 audit: BPF prog-id=198 op=LOAD Jan 16 18:05:33.918000 audit[4823]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd7079b98 a2=98 a3=ffffd7079b88 items=0 ppid=4594 pid=4823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.918000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:05:33.918000 audit: BPF prog-id=198 op=UNLOAD Jan 16 18:05:33.918000 audit[4823]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd7079b68 a3=0 items=0 ppid=4594 pid=4823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.918000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:05:33.920000 audit: BPF prog-id=199 op=LOAD Jan 16 18:05:33.920000 audit[4823]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd7079878 a2=74 a3=95 items=0 ppid=4594 pid=4823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.920000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:05:33.921000 audit: BPF prog-id=199 op=UNLOAD Jan 16 18:05:33.921000 audit[4823]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4594 pid=4823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.921000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:05:33.921000 audit: BPF prog-id=200 op=LOAD Jan 16 18:05:33.921000 audit[4823]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd70798d8 a2=94 a3=2 items=0 ppid=4594 pid=4823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.921000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:05:33.921000 audit: BPF prog-id=200 op=UNLOAD Jan 16 18:05:33.921000 audit[4823]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4594 pid=4823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.921000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:05:33.921000 audit: BPF prog-id=201 op=LOAD Jan 16 18:05:33.921000 audit[4823]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd7079758 a2=40 a3=ffffd7079788 items=0 ppid=4594 pid=4823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.921000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:05:33.922000 audit: BPF prog-id=201 op=UNLOAD Jan 16 18:05:33.922000 audit[4823]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffd7079788 items=0 ppid=4594 pid=4823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.922000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:05:33.922000 audit: BPF prog-id=202 op=LOAD Jan 16 18:05:33.922000 audit[4823]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd70798a8 a2=94 a3=b7 items=0 ppid=4594 pid=4823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.922000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:05:33.922000 audit: BPF prog-id=202 op=UNLOAD Jan 16 18:05:33.922000 audit[4823]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4594 pid=4823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.922000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:05:33.925000 audit: BPF prog-id=203 op=LOAD Jan 16 18:05:33.925000 audit[4823]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd7078f58 a2=94 a3=2 items=0 ppid=4594 pid=4823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.925000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:05:33.925000 audit: BPF prog-id=203 op=UNLOAD Jan 16 18:05:33.925000 audit[4823]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4594 pid=4823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.925000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:05:33.925000 audit: BPF prog-id=204 op=LOAD Jan 16 18:05:33.925000 audit[4823]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd70790e8 a2=94 a3=30 items=0 ppid=4594 pid=4823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.925000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:05:33.931000 audit: BPF prog-id=205 op=LOAD Jan 16 18:05:33.931000 audit[4827]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff1711008 a2=98 a3=fffff1710ff8 items=0 ppid=4594 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.931000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:05:33.932000 audit: BPF prog-id=205 op=UNLOAD Jan 16 18:05:33.932000 audit[4827]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff1710fd8 a3=0 items=0 ppid=4594 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.932000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:05:33.932000 audit: BPF prog-id=206 op=LOAD Jan 16 18:05:33.932000 audit[4827]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff1710c98 a2=74 a3=95 items=0 ppid=4594 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.932000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:05:33.932000 audit: BPF prog-id=206 op=UNLOAD Jan 16 18:05:33.932000 audit[4827]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4594 pid=4827 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.932000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:05:33.932000 audit: BPF prog-id=207 op=LOAD Jan 16 18:05:33.932000 audit[4827]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff1710cf8 a2=94 a3=2 items=0 ppid=4594 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.932000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:05:33.932000 audit: BPF prog-id=207 op=UNLOAD Jan 16 18:05:33.932000 audit[4827]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4594 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:33.932000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:05:34.024420 systemd-networkd[1870]: calibdc70b36c92: Gained IPv6LL Jan 16 18:05:34.061518 kubelet[3428]: E0116 18:05:34.060526 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c9648cbf-qbkfw" podUID="170a3e0a-b54a-4909-a38e-9cdfe9da4171" Jan 16 18:05:34.153000 audit[4832]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=4832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:34.153000 audit[4832]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe485ef80 a2=0 a3=1 items=0 ppid=3574 pid=4832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:34.153000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:34.159000 audit[4832]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=4832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:34.159000 audit[4832]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe485ef80 a2=0 a3=1 items=0 ppid=3574 pid=4832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:34.159000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:34.260000 audit: BPF prog-id=208 op=LOAD Jan 16 18:05:34.260000 audit[4827]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff1710cb8 a2=40 a3=fffff1710ce8 items=0 ppid=4594 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:34.260000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:05:34.260000 audit: BPF prog-id=208 op=UNLOAD Jan 16 18:05:34.260000 audit[4827]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=fffff1710ce8 items=0 ppid=4594 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:34.260000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:05:34.280000 audit: BPF prog-id=209 op=LOAD Jan 16 18:05:34.280000 audit[4827]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff1710cc8 a2=94 a3=4 items=0 ppid=4594 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:34.280000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:05:34.280000 audit: BPF prog-id=209 op=UNLOAD Jan 16 18:05:34.280000 audit[4827]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4594 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:34.280000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:05:34.281000 audit: BPF prog-id=210 op=LOAD Jan 16 18:05:34.281000 audit[4827]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff1710b08 a2=94 a3=5 items=0 ppid=4594 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:34.281000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 
18:05:34.282000 audit: BPF prog-id=210 op=UNLOAD Jan 16 18:05:34.282000 audit[4827]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4594 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:34.282000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:05:34.282000 audit: BPF prog-id=211 op=LOAD Jan 16 18:05:34.282000 audit[4827]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff1710d38 a2=94 a3=6 items=0 ppid=4594 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:34.282000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:05:34.282000 audit: BPF prog-id=211 op=UNLOAD Jan 16 18:05:34.282000 audit[4827]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4594 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:34.282000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:05:34.282000 audit: BPF prog-id=212 op=LOAD Jan 16 18:05:34.282000 audit[4827]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff1710508 a2=94 a3=83 items=0 ppid=4594 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:34.282000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:05:34.283000 audit: BPF prog-id=213 op=LOAD Jan 16 18:05:34.283000 audit[4827]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=fffff17102c8 a2=94 a3=2 items=0 ppid=4594 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:34.283000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:05:34.283000 audit: BPF prog-id=213 op=UNLOAD Jan 16 18:05:34.283000 audit[4827]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4594 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:34.283000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:05:34.284000 audit: BPF prog-id=212 op=UNLOAD Jan 16 18:05:34.284000 audit[4827]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=3178a620 a3=3177db00 items=0 ppid=4594 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:34.284000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:05:34.290000 audit: BPF prog-id=204 op=UNLOAD Jan 16 18:05:34.290000 audit[4594]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000974a80 a2=0 a3=0 items=0 ppid=4582 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:34.290000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 16 18:05:34.382000 audit[4854]: NETFILTER_CFG table=mangle:127 family=2 entries=16 op=nft_register_chain pid=4854 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:05:34.382000 audit[4854]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=fffff6fd5b10 a2=0 a3=ffff90038fa8 items=0 ppid=4594 pid=4854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:34.382000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:05:34.393000 audit[4855]: NETFILTER_CFG table=nat:128 family=2 entries=15 op=nft_register_chain pid=4855 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:05:34.393000 audit[4855]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffefaa1f10 a2=0 a3=ffff8df54fa8 items=0 ppid=4594 pid=4855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:34.393000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:05:34.398000 audit[4853]: NETFILTER_CFG table=raw:129 family=2 entries=21 op=nft_register_chain pid=4853 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:05:34.398000 audit[4853]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffde5cc320 a2=0 a3=ffff8c1aafa8 items=0 ppid=4594 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:34.398000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:05:34.412000 audit[4860]: NETFILTER_CFG 
table=filter:130 family=2 entries=94 op=nft_register_chain pid=4860 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:05:34.412000 audit[4860]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=fffff759ba50 a2=0 a3=ffffafb7cfa8 items=0 ppid=4594 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:34.412000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:05:34.735722 containerd[1954]: time="2026-01-16T18:05:34.735643356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76db965947-cmgcq,Uid:d70fefd9-029e-4ddf-8559-5e71028d4fd0,Namespace:calico-system,Attempt:0,}" Jan 16 18:05:34.973916 systemd-networkd[1870]: cali8df69e2e6ce: Link UP Jan 16 18:05:34.976633 systemd-networkd[1870]: cali8df69e2e6ce: Gained carrier Jan 16 18:05:35.010256 containerd[1954]: 2026-01-16 18:05:34.831 [INFO][4867] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--249-k8s-calico--kube--controllers--76db965947--cmgcq-eth0 calico-kube-controllers-76db965947- calico-system d70fefd9-029e-4ddf-8559-5e71028d4fd0 826 0 2026-01-16 18:05:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76db965947 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-22-249 calico-kube-controllers-76db965947-cmgcq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8df69e2e6ce [] [] }} ContainerID="039968bce1ddbd04ac59ee24212d77802184153949703e271d632670fa2dc954" Namespace="calico-system" Pod="calico-kube-controllers-76db965947-cmgcq" WorkloadEndpoint="ip--172--31--22--249-k8s-calico--kube--controllers--76db965947--cmgcq-" Jan 16 18:05:35.010256 containerd[1954]: 2026-01-16 18:05:34.832 [INFO][4867] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="039968bce1ddbd04ac59ee24212d77802184153949703e271d632670fa2dc954" Namespace="calico-system" Pod="calico-kube-controllers-76db965947-cmgcq" WorkloadEndpoint="ip--172--31--22--249-k8s-calico--kube--controllers--76db965947--cmgcq-eth0" Jan 16 18:05:35.010256 containerd[1954]: 2026-01-16 18:05:34.897 [INFO][4880] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="039968bce1ddbd04ac59ee24212d77802184153949703e271d632670fa2dc954" HandleID="k8s-pod-network.039968bce1ddbd04ac59ee24212d77802184153949703e271d632670fa2dc954" Workload="ip--172--31--22--249-k8s-calico--kube--controllers--76db965947--cmgcq-eth0" Jan 16 18:05:35.011445 containerd[1954]: 2026-01-16 18:05:34.898 [INFO][4880] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="039968bce1ddbd04ac59ee24212d77802184153949703e271d632670fa2dc954" HandleID="k8s-pod-network.039968bce1ddbd04ac59ee24212d77802184153949703e271d632670fa2dc954" Workload="ip--172--31--22--249-k8s-calico--kube--controllers--76db965947--cmgcq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024be40), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-249", "pod":"calico-kube-controllers-76db965947-cmgcq", 
"timestamp":"2026-01-16 18:05:34.897939733 +0000 UTC"}, Hostname:"ip-172-31-22-249", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 18:05:35.011445 containerd[1954]: 2026-01-16 18:05:34.898 [INFO][4880] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 18:05:35.011445 containerd[1954]: 2026-01-16 18:05:34.898 [INFO][4880] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 18:05:35.011445 containerd[1954]: 2026-01-16 18:05:34.898 [INFO][4880] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-249' Jan 16 18:05:35.011445 containerd[1954]: 2026-01-16 18:05:34.913 [INFO][4880] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.039968bce1ddbd04ac59ee24212d77802184153949703e271d632670fa2dc954" host="ip-172-31-22-249" Jan 16 18:05:35.011445 containerd[1954]: 2026-01-16 18:05:34.922 [INFO][4880] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-249" Jan 16 18:05:35.011445 containerd[1954]: 2026-01-16 18:05:34.930 [INFO][4880] ipam/ipam.go 511: Trying affinity for 192.168.54.0/26 host="ip-172-31-22-249" Jan 16 18:05:35.011445 containerd[1954]: 2026-01-16 18:05:34.934 [INFO][4880] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.0/26 host="ip-172-31-22-249" Jan 16 18:05:35.011445 containerd[1954]: 2026-01-16 18:05:34.938 [INFO][4880] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="ip-172-31-22-249" Jan 16 18:05:35.012659 containerd[1954]: 2026-01-16 18:05:34.938 [INFO][4880] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.039968bce1ddbd04ac59ee24212d77802184153949703e271d632670fa2dc954" host="ip-172-31-22-249" Jan 16 18:05:35.012659 containerd[1954]: 2026-01-16 18:05:34.943 [INFO][4880] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.039968bce1ddbd04ac59ee24212d77802184153949703e271d632670fa2dc954 Jan 16 18:05:35.012659 containerd[1954]: 2026-01-16 18:05:34.951 [INFO][4880] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.039968bce1ddbd04ac59ee24212d77802184153949703e271d632670fa2dc954" host="ip-172-31-22-249" Jan 16 18:05:35.012659 containerd[1954]: 2026-01-16 18:05:34.960 [INFO][4880] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.2/26] block=192.168.54.0/26 handle="k8s-pod-network.039968bce1ddbd04ac59ee24212d77802184153949703e271d632670fa2dc954" host="ip-172-31-22-249" Jan 16 18:05:35.012659 containerd[1954]: 2026-01-16 18:05:34.962 [INFO][4880] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.2/26] handle="k8s-pod-network.039968bce1ddbd04ac59ee24212d77802184153949703e271d632670fa2dc954" host="ip-172-31-22-249" Jan 16 18:05:35.012659 containerd[1954]: 2026-01-16 18:05:34.962 [INFO][4880] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 16 18:05:35.012659 containerd[1954]: 2026-01-16 18:05:34.962 [INFO][4880] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.2/26] IPv6=[] ContainerID="039968bce1ddbd04ac59ee24212d77802184153949703e271d632670fa2dc954" HandleID="k8s-pod-network.039968bce1ddbd04ac59ee24212d77802184153949703e271d632670fa2dc954" Workload="ip--172--31--22--249-k8s-calico--kube--controllers--76db965947--cmgcq-eth0" Jan 16 18:05:35.013723 containerd[1954]: 2026-01-16 18:05:34.966 [INFO][4867] cni-plugin/k8s.go 418: Populated endpoint ContainerID="039968bce1ddbd04ac59ee24212d77802184153949703e271d632670fa2dc954" Namespace="calico-system" Pod="calico-kube-controllers-76db965947-cmgcq" WorkloadEndpoint="ip--172--31--22--249-k8s-calico--kube--controllers--76db965947--cmgcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--249-k8s-calico--kube--controllers--76db965947--cmgcq-eth0", GenerateName:"calico-kube-controllers-76db965947-", Namespace:"calico-system", SelfLink:"", UID:"d70fefd9-029e-4ddf-8559-5e71028d4fd0", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 5, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76db965947", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-249", ContainerID:"", Pod:"calico-kube-controllers-76db965947-cmgcq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8df69e2e6ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:05:35.014399 containerd[1954]: 2026-01-16 18:05:34.967 [INFO][4867] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.2/32] ContainerID="039968bce1ddbd04ac59ee24212d77802184153949703e271d632670fa2dc954" Namespace="calico-system" Pod="calico-kube-controllers-76db965947-cmgcq" WorkloadEndpoint="ip--172--31--22--249-k8s-calico--kube--controllers--76db965947--cmgcq-eth0" Jan 16 18:05:35.014399 containerd[1954]: 2026-01-16 18:05:34.967 [INFO][4867] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8df69e2e6ce ContainerID="039968bce1ddbd04ac59ee24212d77802184153949703e271d632670fa2dc954" Namespace="calico-system" Pod="calico-kube-controllers-76db965947-cmgcq" WorkloadEndpoint="ip--172--31--22--249-k8s-calico--kube--controllers--76db965947--cmgcq-eth0" Jan 16 18:05:35.014399 containerd[1954]: 2026-01-16 18:05:34.978 [INFO][4867] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="039968bce1ddbd04ac59ee24212d77802184153949703e271d632670fa2dc954" Namespace="calico-system" Pod="calico-kube-controllers-76db965947-cmgcq" WorkloadEndpoint="ip--172--31--22--249-k8s-calico--kube--controllers--76db965947--cmgcq-eth0" Jan 16 18:05:35.014764 containerd[1954]: 
2026-01-16 18:05:34.979 [INFO][4867] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="039968bce1ddbd04ac59ee24212d77802184153949703e271d632670fa2dc954" Namespace="calico-system" Pod="calico-kube-controllers-76db965947-cmgcq" WorkloadEndpoint="ip--172--31--22--249-k8s-calico--kube--controllers--76db965947--cmgcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--249-k8s-calico--kube--controllers--76db965947--cmgcq-eth0", GenerateName:"calico-kube-controllers-76db965947-", Namespace:"calico-system", SelfLink:"", UID:"d70fefd9-029e-4ddf-8559-5e71028d4fd0", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 5, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76db965947", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-249", ContainerID:"039968bce1ddbd04ac59ee24212d77802184153949703e271d632670fa2dc954", Pod:"calico-kube-controllers-76db965947-cmgcq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8df69e2e6ce", MAC:"66:51:d4:8d:b3:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:05:35.015244 containerd[1954]: 2026-01-16 18:05:35.002 [INFO][4867] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="039968bce1ddbd04ac59ee24212d77802184153949703e271d632670fa2dc954" Namespace="calico-system" Pod="calico-kube-controllers-76db965947-cmgcq" WorkloadEndpoint="ip--172--31--22--249-k8s-calico--kube--controllers--76db965947--cmgcq-eth0" Jan 16 18:05:35.048390 systemd-networkd[1870]: vxlan.calico: Gained IPv6LL Jan 16 18:05:35.055000 audit[4896]: NETFILTER_CFG table=filter:131 family=2 entries=36 op=nft_register_chain pid=4896 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:05:35.060546 kernel: kauditd_printk_skb: 231 callbacks suppressed Jan 16 18:05:35.060636 kernel: audit: type=1325 audit(1768586735.055:661): table=filter:131 family=2 entries=36 op=nft_register_chain pid=4896 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:05:35.066991 kernel: audit: type=1300 audit(1768586735.055:661): arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=ffffc945e070 a2=0 a3=ffffb9a06fa8 items=0 ppid=4594 pid=4896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:35.055000 audit[4896]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=ffffc945e070 a2=0 a3=ffffb9a06fa8 items=0 ppid=4594 pid=4896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:35.055000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:05:35.081217 kernel: audit: type=1327 audit(1768586735.055:661): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:05:35.090839 containerd[1954]: time="2026-01-16T18:05:35.090513190Z" level=info msg="connecting to shim 039968bce1ddbd04ac59ee24212d77802184153949703e271d632670fa2dc954" address="unix:///run/containerd/s/39d19ebee35367f7ee7aca9e3534b3a5abed84c070b6759605c64619cd4cf686" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:05:35.150480 systemd[1]: Started cri-containerd-039968bce1ddbd04ac59ee24212d77802184153949703e271d632670fa2dc954.scope - libcontainer container 039968bce1ddbd04ac59ee24212d77802184153949703e271d632670fa2dc954. Jan 16 18:05:35.185000 audit: BPF prog-id=214 op=LOAD Jan 16 18:05:35.189159 kernel: audit: type=1334 audit(1768586735.185:662): prog-id=214 op=LOAD Jan 16 18:05:35.189801 kernel: audit: type=1334 audit(1768586735.188:663): prog-id=215 op=LOAD Jan 16 18:05:35.188000 audit: BPF prog-id=215 op=LOAD Jan 16 18:05:35.188000 audit[4916]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4905 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:35.197856 kernel: audit: type=1300 audit(1768586735.188:663): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4905 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:35.198816 kernel: audit: type=1327 audit(1768586735.188:663): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033393936386263653164646264303461633539656532343231326437 Jan 16 18:05:35.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033393936386263653164646264303461633539656532343231326437 Jan 16 18:05:35.188000 audit: BPF prog-id=215 op=UNLOAD Jan 16 18:05:35.212274 kernel: audit: type=1334 audit(1768586735.188:664): prog-id=215 op=UNLOAD Jan 16 18:05:35.212384 kernel: audit: type=1300 audit(1768586735.188:664): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4905 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:35.212434 kernel: audit: type=1327 audit(1768586735.188:664): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033393936386263653164646264303461633539656532343231326437 Jan 16 18:05:35.188000 
audit[4916]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4905 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:35.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033393936386263653164646264303461633539656532343231326437 Jan 16 18:05:35.188000 audit: BPF prog-id=216 op=LOAD Jan 16 18:05:35.188000 audit[4916]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4905 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:35.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033393936386263653164646264303461633539656532343231326437 Jan 16 18:05:35.197000 audit: BPF prog-id=217 op=LOAD Jan 16 18:05:35.197000 audit[4916]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4905 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:35.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033393936386263653164646264303461633539656532343231326437 Jan 16 18:05:35.203000 audit: BPF prog-id=217 op=UNLOAD Jan 16 18:05:35.203000 audit[4916]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4905 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:35.203000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033393936386263653164646264303461633539656532343231326437 Jan 16 18:05:35.203000 audit: BPF prog-id=216 op=UNLOAD Jan 16 18:05:35.203000 audit[4916]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4905 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:35.203000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033393936386263653164646264303461633539656532343231326437 Jan 16 18:05:35.203000 audit: BPF prog-id=218 op=LOAD Jan 16 18:05:35.203000 audit[4916]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4905 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:35.203000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033393936386263653164646264303461633539656532343231326437 Jan 16 18:05:35.269868 containerd[1954]: time="2026-01-16T18:05:35.269717159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76db965947-cmgcq,Uid:d70fefd9-029e-4ddf-8559-5e71028d4fd0,Namespace:calico-system,Attempt:0,} returns sandbox id \"039968bce1ddbd04ac59ee24212d77802184153949703e271d632670fa2dc954\"" Jan 16 18:05:35.276370 containerd[1954]: time="2026-01-16T18:05:35.276309983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 18:05:35.552248 containerd[1954]: time="2026-01-16T18:05:35.552038028Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:05:35.555191 containerd[1954]: time="2026-01-16T18:05:35.554822844Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 18:05:35.555191 containerd[1954]: time="2026-01-16T18:05:35.554899092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:35.556093 kubelet[3428]: E0116 18:05:35.556031 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:05:35.558279 kubelet[3428]: E0116 18:05:35.556272 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:05:35.558279 kubelet[3428]: E0116 18:05:35.556821 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hgjq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76db965947-cmgcq_calico-system(d70fefd9-029e-4ddf-8559-5e71028d4fd0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:35.559009 kubelet[3428]: E0116 18:05:35.558838 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76db965947-cmgcq" podUID="d70fefd9-029e-4ddf-8559-5e71028d4fd0" Jan 16 18:05:35.736204 containerd[1954]: time="2026-01-16T18:05:35.735719689Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-88slq,Uid:74813863-8ca6-40d9-bd92-5b37511fc2e0,Namespace:calico-system,Attempt:0,}" Jan 16 18:05:35.737156 containerd[1954]: time="2026-01-16T18:05:35.736337065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cbrj8,Uid:1d57dc7e-5c0c-47c3-a61e-c7073133e9a5,Namespace:kube-system,Attempt:0,}" Jan 16 18:05:35.737156 containerd[1954]: time="2026-01-16T18:05:35.736483981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-t9x2h,Uid:3932bc10-72fd-4993-add3-bcb26a36ba2d,Namespace:calico-system,Attempt:0,}" Jan 16 18:05:36.085890 kubelet[3428]: E0116 18:05:36.085496 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76db965947-cmgcq" podUID="d70fefd9-029e-4ddf-8559-5e71028d4fd0" Jan 16 18:05:36.157408 systemd-networkd[1870]: cali7885ee761de: Link UP Jan 16 18:05:36.157861 systemd-networkd[1870]: cali7885ee761de: Gained carrier Jan 16 18:05:36.197715 containerd[1954]: 2026-01-16 18:05:35.943 [INFO][4944] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--249-k8s-coredns--668d6bf9bc--cbrj8-eth0 coredns-668d6bf9bc- kube-system 1d57dc7e-5c0c-47c3-a61e-c7073133e9a5 831 0 2026-01-16 18:04:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-22-249 coredns-668d6bf9bc-cbrj8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7885ee761de [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbrj8" WorkloadEndpoint="ip--172--31--22--249-k8s-coredns--668d6bf9bc--cbrj8-" Jan 16 18:05:36.197715 containerd[1954]: 2026-01-16 18:05:35.943 [INFO][4944] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbrj8" WorkloadEndpoint="ip--172--31--22--249-k8s-coredns--668d6bf9bc--cbrj8-eth0" Jan 16 18:05:36.197715 containerd[1954]: 2026-01-16 18:05:36.053 [INFO][4984] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc" HandleID="k8s-pod-network.5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc" Workload="ip--172--31--22--249-k8s-coredns--668d6bf9bc--cbrj8-eth0" Jan 16 18:05:36.198046 containerd[1954]: 2026-01-16 18:05:36.053 [INFO][4984] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc" HandleID="k8s-pod-network.5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc" Workload="ip--172--31--22--249-k8s-coredns--668d6bf9bc--cbrj8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000120380), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-249", 
"pod":"coredns-668d6bf9bc-cbrj8", "timestamp":"2026-01-16 18:05:36.053268143 +0000 UTC"}, Hostname:"ip-172-31-22-249", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 18:05:36.198046 containerd[1954]: 2026-01-16 18:05:36.053 [INFO][4984] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 18:05:36.198046 containerd[1954]: 2026-01-16 18:05:36.053 [INFO][4984] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 18:05:36.198046 containerd[1954]: 2026-01-16 18:05:36.053 [INFO][4984] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-249' Jan 16 18:05:36.198046 containerd[1954]: 2026-01-16 18:05:36.080 [INFO][4984] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc" host="ip-172-31-22-249" Jan 16 18:05:36.198046 containerd[1954]: 2026-01-16 18:05:36.094 [INFO][4984] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-249" Jan 16 18:05:36.198046 containerd[1954]: 2026-01-16 18:05:36.104 [INFO][4984] ipam/ipam.go 511: Trying affinity for 192.168.54.0/26 host="ip-172-31-22-249" Jan 16 18:05:36.198046 containerd[1954]: 2026-01-16 18:05:36.110 [INFO][4984] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.0/26 host="ip-172-31-22-249" Jan 16 18:05:36.198046 containerd[1954]: 2026-01-16 18:05:36.118 [INFO][4984] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="ip-172-31-22-249" Jan 16 18:05:36.198768 containerd[1954]: 2026-01-16 18:05:36.118 [INFO][4984] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc" host="ip-172-31-22-249" Jan 16 18:05:36.198768 containerd[1954]: 2026-01-16 18:05:36.125 [INFO][4984] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc Jan 16 18:05:36.198768 containerd[1954]: 2026-01-16 18:05:36.132 [INFO][4984] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc" host="ip-172-31-22-249" Jan 16 18:05:36.198768 containerd[1954]: 2026-01-16 18:05:36.144 [INFO][4984] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.3/26] block=192.168.54.0/26 handle="k8s-pod-network.5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc" host="ip-172-31-22-249" Jan 16 18:05:36.198768 containerd[1954]: 2026-01-16 18:05:36.144 [INFO][4984] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.3/26] handle="k8s-pod-network.5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc" host="ip-172-31-22-249" Jan 16 18:05:36.198768 containerd[1954]: 2026-01-16 18:05:36.144 [INFO][4984] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 16 18:05:36.198768 containerd[1954]: 2026-01-16 18:05:36.144 [INFO][4984] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.3/26] IPv6=[] ContainerID="5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc" HandleID="k8s-pod-network.5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc" Workload="ip--172--31--22--249-k8s-coredns--668d6bf9bc--cbrj8-eth0" Jan 16 18:05:36.199177 containerd[1954]: 2026-01-16 18:05:36.150 [INFO][4944] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbrj8" WorkloadEndpoint="ip--172--31--22--249-k8s-coredns--668d6bf9bc--cbrj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--249-k8s-coredns--668d6bf9bc--cbrj8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1d57dc7e-5c0c-47c3-a61e-c7073133e9a5", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 4, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-249", ContainerID:"", Pod:"coredns-668d6bf9bc-cbrj8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7885ee761de", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:05:36.199177 containerd[1954]: 2026-01-16 18:05:36.150 [INFO][4944] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.3/32] ContainerID="5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbrj8" WorkloadEndpoint="ip--172--31--22--249-k8s-coredns--668d6bf9bc--cbrj8-eth0" Jan 16 18:05:36.199177 containerd[1954]: 2026-01-16 18:05:36.150 [INFO][4944] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7885ee761de ContainerID="5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbrj8" WorkloadEndpoint="ip--172--31--22--249-k8s-coredns--668d6bf9bc--cbrj8-eth0" Jan 16 18:05:36.199177 containerd[1954]: 2026-01-16 18:05:36.156 [INFO][4944] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbrj8" 
WorkloadEndpoint="ip--172--31--22--249-k8s-coredns--668d6bf9bc--cbrj8-eth0" Jan 16 18:05:36.199177 containerd[1954]: 2026-01-16 18:05:36.157 [INFO][4944] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbrj8" WorkloadEndpoint="ip--172--31--22--249-k8s-coredns--668d6bf9bc--cbrj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--249-k8s-coredns--668d6bf9bc--cbrj8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1d57dc7e-5c0c-47c3-a61e-c7073133e9a5", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 4, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-249", ContainerID:"5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc", Pod:"coredns-668d6bf9bc-cbrj8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7885ee761de", MAC:"5e:7b:1c:b0:e8:c5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:05:36.199177 containerd[1954]: 2026-01-16 18:05:36.190 [INFO][4944] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbrj8" WorkloadEndpoint="ip--172--31--22--249-k8s-coredns--668d6bf9bc--cbrj8-eth0" Jan 16 18:05:36.288625 containerd[1954]: time="2026-01-16T18:05:36.288255360Z" level=info msg="connecting to shim 5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc" address="unix:///run/containerd/s/f8313e4737ff710114cae581fc3ffe237bce46485d190ed499eea0b6bf7a449b" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:05:36.307772 systemd-networkd[1870]: calib500d6e3d92: Link UP Jan 16 18:05:36.311232 systemd-networkd[1870]: calib500d6e3d92: Gained carrier Jan 16 18:05:36.316000 audit[5021]: NETFILTER_CFG table=filter:132 family=2 entries=46 op=nft_register_chain pid=5021 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:05:36.316000 audit[5021]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23740 a0=3 a1=ffffde86d930 a2=0 a3=ffffae5cefa8 items=0 ppid=4594 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.316000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:05:36.378160 containerd[1954]: 2026-01-16 18:05:35.965 [INFO][4962] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--249-k8s-goldmane--666569f655--t9x2h-eth0 goldmane-666569f655- calico-system 3932bc10-72fd-4993-add3-bcb26a36ba2d 828 0 2026-01-16 18:05:07 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-22-249 goldmane-666569f655-t9x2h eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib500d6e3d92 [] [] }} ContainerID="fec4b9c43b45e4c82976440011fcfe4518599a6dbe8ae4f839ed800dd2e61528" Namespace="calico-system" Pod="goldmane-666569f655-t9x2h" WorkloadEndpoint="ip--172--31--22--249-k8s-goldmane--666569f655--t9x2h-" Jan 16 18:05:36.378160 containerd[1954]: 2026-01-16 18:05:35.965 [INFO][4962] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fec4b9c43b45e4c82976440011fcfe4518599a6dbe8ae4f839ed800dd2e61528" Namespace="calico-system" Pod="goldmane-666569f655-t9x2h" WorkloadEndpoint="ip--172--31--22--249-k8s-goldmane--666569f655--t9x2h-eth0" Jan 16 18:05:36.378160 containerd[1954]: 2026-01-16 18:05:36.078 [INFO][4989] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fec4b9c43b45e4c82976440011fcfe4518599a6dbe8ae4f839ed800dd2e61528" HandleID="k8s-pod-network.fec4b9c43b45e4c82976440011fcfe4518599a6dbe8ae4f839ed800dd2e61528" Workload="ip--172--31--22--249-k8s-goldmane--666569f655--t9x2h-eth0" Jan 16 18:05:36.378160 containerd[1954]: 2026-01-16 18:05:36.080 [INFO][4989] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fec4b9c43b45e4c82976440011fcfe4518599a6dbe8ae4f839ed800dd2e61528" HandleID="k8s-pod-network.fec4b9c43b45e4c82976440011fcfe4518599a6dbe8ae4f839ed800dd2e61528" Workload="ip--172--31--22--249-k8s-goldmane--666569f655--t9x2h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000379580), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-249", "pod":"goldmane-666569f655-t9x2h", "timestamp":"2026-01-16 18:05:36.078205907 +0000 UTC"}, Hostname:"ip-172-31-22-249", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 18:05:36.378160 containerd[1954]: 2026-01-16 18:05:36.081 [INFO][4989] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 18:05:36.378160 containerd[1954]: 2026-01-16 18:05:36.144 [INFO][4989] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 18:05:36.378160 containerd[1954]: 2026-01-16 18:05:36.144 [INFO][4989] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-249' Jan 16 18:05:36.378160 containerd[1954]: 2026-01-16 18:05:36.178 [INFO][4989] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fec4b9c43b45e4c82976440011fcfe4518599a6dbe8ae4f839ed800dd2e61528" host="ip-172-31-22-249" Jan 16 18:05:36.378160 containerd[1954]: 2026-01-16 18:05:36.196 [INFO][4989] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-249" Jan 16 18:05:36.378160 containerd[1954]: 2026-01-16 18:05:36.210 [INFO][4989] ipam/ipam.go 511: Trying affinity for 192.168.54.0/26 host="ip-172-31-22-249" Jan 16 18:05:36.378160 containerd[1954]: 2026-01-16 18:05:36.215 [INFO][4989] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.0/26 host="ip-172-31-22-249" Jan 16 18:05:36.378160 containerd[1954]: 2026-01-16 18:05:36.224 [INFO][4989] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="ip-172-31-22-249" Jan 16 18:05:36.378160 containerd[1954]: 2026-01-16 18:05:36.224 [INFO][4989] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.fec4b9c43b45e4c82976440011fcfe4518599a6dbe8ae4f839ed800dd2e61528" host="ip-172-31-22-249" Jan 16 18:05:36.378160 containerd[1954]: 2026-01-16 18:05:36.230 [INFO][4989] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fec4b9c43b45e4c82976440011fcfe4518599a6dbe8ae4f839ed800dd2e61528 Jan 16 18:05:36.378160 containerd[1954]: 2026-01-16 18:05:36.241 [INFO][4989] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.fec4b9c43b45e4c82976440011fcfe4518599a6dbe8ae4f839ed800dd2e61528" host="ip-172-31-22-249" Jan 16 18:05:36.378160 containerd[1954]: 2026-01-16 18:05:36.265 [INFO][4989] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.4/26] block=192.168.54.0/26 handle="k8s-pod-network.fec4b9c43b45e4c82976440011fcfe4518599a6dbe8ae4f839ed800dd2e61528" host="ip-172-31-22-249" Jan 16 18:05:36.378160 containerd[1954]: 2026-01-16 18:05:36.265 [INFO][4989] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.4/26] handle="k8s-pod-network.fec4b9c43b45e4c82976440011fcfe4518599a6dbe8ae4f839ed800dd2e61528" host="ip-172-31-22-249" Jan 16 18:05:36.378160 containerd[1954]: 2026-01-16 18:05:36.265 [INFO][4989] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
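One small reading aid: the WorkloadEndpoint dumps in these entries print port numbers as Go hex literals, e.g. Port:0x35 and Port:0x23c1 on the coredns endpoint above. Decoded, they are the expected CoreDNS ports, as the trivial check below shows.

// portcheck.go - decode the hex Port values from the WorkloadEndpoint dumps above.
package main

import "fmt"

func main() {
	fmt.Println(0x35, 0x23c1) // prints "53 9153": DNS (dns/dns-tcp) and the metrics port
}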
Jan 16 18:05:36.378160 containerd[1954]: 2026-01-16 18:05:36.265 [INFO][4989] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.4/26] IPv6=[] ContainerID="fec4b9c43b45e4c82976440011fcfe4518599a6dbe8ae4f839ed800dd2e61528" HandleID="k8s-pod-network.fec4b9c43b45e4c82976440011fcfe4518599a6dbe8ae4f839ed800dd2e61528" Workload="ip--172--31--22--249-k8s-goldmane--666569f655--t9x2h-eth0" Jan 16 18:05:36.380992 containerd[1954]: 2026-01-16 18:05:36.284 [INFO][4962] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fec4b9c43b45e4c82976440011fcfe4518599a6dbe8ae4f839ed800dd2e61528" Namespace="calico-system" Pod="goldmane-666569f655-t9x2h" WorkloadEndpoint="ip--172--31--22--249-k8s-goldmane--666569f655--t9x2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--249-k8s-goldmane--666569f655--t9x2h-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3932bc10-72fd-4993-add3-bcb26a36ba2d", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 5, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-249", ContainerID:"", Pod:"goldmane-666569f655-t9x2h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.54.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib500d6e3d92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:05:36.380992 containerd[1954]: 2026-01-16 18:05:36.285 [INFO][4962] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.4/32] ContainerID="fec4b9c43b45e4c82976440011fcfe4518599a6dbe8ae4f839ed800dd2e61528" Namespace="calico-system" Pod="goldmane-666569f655-t9x2h" WorkloadEndpoint="ip--172--31--22--249-k8s-goldmane--666569f655--t9x2h-eth0" Jan 16 18:05:36.380992 containerd[1954]: 2026-01-16 18:05:36.285 [INFO][4962] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib500d6e3d92 ContainerID="fec4b9c43b45e4c82976440011fcfe4518599a6dbe8ae4f839ed800dd2e61528" Namespace="calico-system" Pod="goldmane-666569f655-t9x2h" WorkloadEndpoint="ip--172--31--22--249-k8s-goldmane--666569f655--t9x2h-eth0" Jan 16 18:05:36.380992 containerd[1954]: 2026-01-16 18:05:36.308 [INFO][4962] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fec4b9c43b45e4c82976440011fcfe4518599a6dbe8ae4f839ed800dd2e61528" Namespace="calico-system" Pod="goldmane-666569f655-t9x2h" WorkloadEndpoint="ip--172--31--22--249-k8s-goldmane--666569f655--t9x2h-eth0" Jan 16 18:05:36.380992 containerd[1954]: 2026-01-16 18:05:36.311 [INFO][4962] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fec4b9c43b45e4c82976440011fcfe4518599a6dbe8ae4f839ed800dd2e61528" Namespace="calico-system" Pod="goldmane-666569f655-t9x2h" 
WorkloadEndpoint="ip--172--31--22--249-k8s-goldmane--666569f655--t9x2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--249-k8s-goldmane--666569f655--t9x2h-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3932bc10-72fd-4993-add3-bcb26a36ba2d", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 5, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-249", ContainerID:"fec4b9c43b45e4c82976440011fcfe4518599a6dbe8ae4f839ed800dd2e61528", Pod:"goldmane-666569f655-t9x2h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.54.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib500d6e3d92", MAC:"ba:f0:94:2c:3d:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:05:36.380992 containerd[1954]: 2026-01-16 18:05:36.367 [INFO][4962] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fec4b9c43b45e4c82976440011fcfe4518599a6dbe8ae4f839ed800dd2e61528" Namespace="calico-system" Pod="goldmane-666569f655-t9x2h" WorkloadEndpoint="ip--172--31--22--249-k8s-goldmane--666569f655--t9x2h-eth0" Jan 16 18:05:36.421797 systemd[1]: Started cri-containerd-5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc.scope - libcontainer container 5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc. 
Jan 16 18:05:36.464000 audit: BPF prog-id=219 op=LOAD Jan 16 18:05:36.466000 audit: BPF prog-id=220 op=LOAD Jan 16 18:05:36.466000 audit[5035]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=5020 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561353037396131646333383534383163666430363136656232643330 Jan 16 18:05:36.466000 audit: BPF prog-id=220 op=UNLOAD Jan 16 18:05:36.466000 audit[5035]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5020 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561353037396131646333383534383163666430363136656232643330 Jan 16 18:05:36.467000 audit: BPF prog-id=221 op=LOAD Jan 16 18:05:36.467000 audit[5035]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=5020 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561353037396131646333383534383163666430363136656232643330 Jan 16 18:05:36.467000 audit: BPF prog-id=222 op=LOAD Jan 16 18:05:36.467000 audit[5035]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=5020 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561353037396131646333383534383163666430363136656232643330 Jan 16 18:05:36.467000 audit: BPF prog-id=222 op=UNLOAD Jan 16 18:05:36.467000 audit[5035]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5020 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561353037396131646333383534383163666430363136656232643330 Jan 16 18:05:36.467000 audit: BPF prog-id=221 op=UNLOAD Jan 16 18:05:36.467000 audit[5035]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5020 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561353037396131646333383534383163666430363136656232643330 Jan 16 18:05:36.477634 containerd[1954]: time="2026-01-16T18:05:36.477031381Z" level=info msg="connecting to shim fec4b9c43b45e4c82976440011fcfe4518599a6dbe8ae4f839ed800dd2e61528" address="unix:///run/containerd/s/5b067773de20cbcf8c3adce5e65b0a540c5652764d53c436d1ce95d94b5aadb2" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:05:36.467000 audit: BPF prog-id=223 op=LOAD Jan 16 18:05:36.467000 audit[5035]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=5020 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561353037396131646333383534383163666430363136656232643330 Jan 16 18:05:36.512277 systemd-networkd[1870]: cali9124d392eb1: Link UP Jan 16 18:05:36.514694 systemd-networkd[1870]: cali9124d392eb1: Gained carrier Jan 16 18:05:36.512000 audit[5085]: NETFILTER_CFG table=filter:133 family=2 entries=58 op=nft_register_chain pid=5085 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:05:36.512000 audit[5085]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30408 a0=3 a1=ffffcf2d9b60 a2=0 a3=ffff85487fa8 items=0 ppid=4594 pid=5085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.512000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:05:36.562563 containerd[1954]: 2026-01-16 18:05:35.985 [INFO][4953] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--249-k8s-csi--node--driver--88slq-eth0 csi-node-driver- calico-system 74813863-8ca6-40d9-bd92-5b37511fc2e0 726 0 2026-01-16 18:05:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-22-249 csi-node-driver-88slq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9124d392eb1 [] [] }} ContainerID="757ae4fa23537f92de4bcc9bd8f3f1731806474329819f1a7d67a58f55aa9fa9" Namespace="calico-system" Pod="csi-node-driver-88slq" WorkloadEndpoint="ip--172--31--22--249-k8s-csi--node--driver--88slq-" Jan 16 18:05:36.562563 containerd[1954]: 2026-01-16 18:05:35.985 [INFO][4953] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="757ae4fa23537f92de4bcc9bd8f3f1731806474329819f1a7d67a58f55aa9fa9" Namespace="calico-system" Pod="csi-node-driver-88slq" WorkloadEndpoint="ip--172--31--22--249-k8s-csi--node--driver--88slq-eth0" Jan 16 18:05:36.562563 containerd[1954]: 2026-01-16 18:05:36.092 [INFO][4994] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="757ae4fa23537f92de4bcc9bd8f3f1731806474329819f1a7d67a58f55aa9fa9" HandleID="k8s-pod-network.757ae4fa23537f92de4bcc9bd8f3f1731806474329819f1a7d67a58f55aa9fa9" Workload="ip--172--31--22--249-k8s-csi--node--driver--88slq-eth0" Jan 16 18:05:36.562563 containerd[1954]: 2026-01-16 18:05:36.094 [INFO][4994] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="757ae4fa23537f92de4bcc9bd8f3f1731806474329819f1a7d67a58f55aa9fa9" HandleID="k8s-pod-network.757ae4fa23537f92de4bcc9bd8f3f1731806474329819f1a7d67a58f55aa9fa9" Workload="ip--172--31--22--249-k8s-csi--node--driver--88slq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030a520), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-249", "pod":"csi-node-driver-88slq", "timestamp":"2026-01-16 18:05:36.092920019 +0000 UTC"}, Hostname:"ip-172-31-22-249", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 18:05:36.562563 containerd[1954]: 2026-01-16 18:05:36.096 [INFO][4994] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 18:05:36.562563 containerd[1954]: 2026-01-16 18:05:36.265 [INFO][4994] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 18:05:36.562563 containerd[1954]: 2026-01-16 18:05:36.271 [INFO][4994] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-249' Jan 16 18:05:36.562563 containerd[1954]: 2026-01-16 18:05:36.325 [INFO][4994] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.757ae4fa23537f92de4bcc9bd8f3f1731806474329819f1a7d67a58f55aa9fa9" host="ip-172-31-22-249" Jan 16 18:05:36.562563 containerd[1954]: 2026-01-16 18:05:36.355 [INFO][4994] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-249" Jan 16 18:05:36.562563 containerd[1954]: 2026-01-16 18:05:36.407 [INFO][4994] ipam/ipam.go 511: Trying affinity for 192.168.54.0/26 host="ip-172-31-22-249" Jan 16 18:05:36.562563 containerd[1954]: 2026-01-16 18:05:36.417 [INFO][4994] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.0/26 host="ip-172-31-22-249" Jan 16 18:05:36.562563 containerd[1954]: 2026-01-16 18:05:36.428 [INFO][4994] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="ip-172-31-22-249" Jan 16 18:05:36.562563 containerd[1954]: 2026-01-16 18:05:36.428 [INFO][4994] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.757ae4fa23537f92de4bcc9bd8f3f1731806474329819f1a7d67a58f55aa9fa9" host="ip-172-31-22-249" Jan 16 18:05:36.562563 containerd[1954]: 2026-01-16 18:05:36.435 [INFO][4994] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.757ae4fa23537f92de4bcc9bd8f3f1731806474329819f1a7d67a58f55aa9fa9 Jan 16 18:05:36.562563 containerd[1954]: 2026-01-16 18:05:36.448 [INFO][4994] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.0/26 
handle="k8s-pod-network.757ae4fa23537f92de4bcc9bd8f3f1731806474329819f1a7d67a58f55aa9fa9" host="ip-172-31-22-249" Jan 16 18:05:36.562563 containerd[1954]: 2026-01-16 18:05:36.465 [INFO][4994] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.5/26] block=192.168.54.0/26 handle="k8s-pod-network.757ae4fa23537f92de4bcc9bd8f3f1731806474329819f1a7d67a58f55aa9fa9" host="ip-172-31-22-249" Jan 16 18:05:36.562563 containerd[1954]: 2026-01-16 18:05:36.465 [INFO][4994] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.5/26] handle="k8s-pod-network.757ae4fa23537f92de4bcc9bd8f3f1731806474329819f1a7d67a58f55aa9fa9" host="ip-172-31-22-249" Jan 16 18:05:36.562563 containerd[1954]: 2026-01-16 18:05:36.465 [INFO][4994] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 18:05:36.562563 containerd[1954]: 2026-01-16 18:05:36.465 [INFO][4994] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.5/26] IPv6=[] ContainerID="757ae4fa23537f92de4bcc9bd8f3f1731806474329819f1a7d67a58f55aa9fa9" HandleID="k8s-pod-network.757ae4fa23537f92de4bcc9bd8f3f1731806474329819f1a7d67a58f55aa9fa9" Workload="ip--172--31--22--249-k8s-csi--node--driver--88slq-eth0" Jan 16 18:05:36.566100 containerd[1954]: 2026-01-16 18:05:36.497 [INFO][4953] cni-plugin/k8s.go 418: Populated endpoint ContainerID="757ae4fa23537f92de4bcc9bd8f3f1731806474329819f1a7d67a58f55aa9fa9" Namespace="calico-system" Pod="csi-node-driver-88slq" WorkloadEndpoint="ip--172--31--22--249-k8s-csi--node--driver--88slq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--249-k8s-csi--node--driver--88slq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"74813863-8ca6-40d9-bd92-5b37511fc2e0", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 5, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-249", ContainerID:"", Pod:"csi-node-driver-88slq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9124d392eb1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:05:36.566100 containerd[1954]: 2026-01-16 18:05:36.498 [INFO][4953] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.5/32] ContainerID="757ae4fa23537f92de4bcc9bd8f3f1731806474329819f1a7d67a58f55aa9fa9" Namespace="calico-system" Pod="csi-node-driver-88slq" WorkloadEndpoint="ip--172--31--22--249-k8s-csi--node--driver--88slq-eth0" Jan 16 18:05:36.566100 containerd[1954]: 2026-01-16 18:05:36.498 [INFO][4953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9124d392eb1 
ContainerID="757ae4fa23537f92de4bcc9bd8f3f1731806474329819f1a7d67a58f55aa9fa9" Namespace="calico-system" Pod="csi-node-driver-88slq" WorkloadEndpoint="ip--172--31--22--249-k8s-csi--node--driver--88slq-eth0" Jan 16 18:05:36.566100 containerd[1954]: 2026-01-16 18:05:36.516 [INFO][4953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="757ae4fa23537f92de4bcc9bd8f3f1731806474329819f1a7d67a58f55aa9fa9" Namespace="calico-system" Pod="csi-node-driver-88slq" WorkloadEndpoint="ip--172--31--22--249-k8s-csi--node--driver--88slq-eth0" Jan 16 18:05:36.566100 containerd[1954]: 2026-01-16 18:05:36.520 [INFO][4953] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="757ae4fa23537f92de4bcc9bd8f3f1731806474329819f1a7d67a58f55aa9fa9" Namespace="calico-system" Pod="csi-node-driver-88slq" WorkloadEndpoint="ip--172--31--22--249-k8s-csi--node--driver--88slq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--249-k8s-csi--node--driver--88slq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"74813863-8ca6-40d9-bd92-5b37511fc2e0", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 5, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-249", ContainerID:"757ae4fa23537f92de4bcc9bd8f3f1731806474329819f1a7d67a58f55aa9fa9", Pod:"csi-node-driver-88slq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9124d392eb1", MAC:"16:f6:1a:d3:d5:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:05:36.566100 containerd[1954]: 2026-01-16 18:05:36.551 [INFO][4953] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="757ae4fa23537f92de4bcc9bd8f3f1731806474329819f1a7d67a58f55aa9fa9" Namespace="calico-system" Pod="csi-node-driver-88slq" WorkloadEndpoint="ip--172--31--22--249-k8s-csi--node--driver--88slq-eth0" Jan 16 18:05:36.573833 systemd[1]: Started cri-containerd-fec4b9c43b45e4c82976440011fcfe4518599a6dbe8ae4f839ed800dd2e61528.scope - libcontainer container fec4b9c43b45e4c82976440011fcfe4518599a6dbe8ae4f839ed800dd2e61528. 
Jan 16 18:05:36.584689 systemd-networkd[1870]: cali8df69e2e6ce: Gained IPv6LL Jan 16 18:05:36.640835 containerd[1954]: time="2026-01-16T18:05:36.640677374Z" level=info msg="connecting to shim 757ae4fa23537f92de4bcc9bd8f3f1731806474329819f1a7d67a58f55aa9fa9" address="unix:///run/containerd/s/0f8daaffeb7d6bcf356b80c275f1ec524e4a973827061be90cf08a6ffed861a6" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:05:36.663000 audit[5128]: NETFILTER_CFG table=filter:134 family=2 entries=44 op=nft_register_chain pid=5128 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:05:36.663000 audit[5128]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21936 a0=3 a1=ffffe4d3f360 a2=0 a3=ffff98511fa8 items=0 ppid=4594 pid=5128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.663000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:05:36.666443 containerd[1954]: time="2026-01-16T18:05:36.666368654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cbrj8,Uid:1d57dc7e-5c0c-47c3-a61e-c7073133e9a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc\"" Jan 16 18:05:36.677763 containerd[1954]: time="2026-01-16T18:05:36.677346794Z" level=info msg="CreateContainer within sandbox \"5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 18:05:36.687000 audit: BPF prog-id=224 op=LOAD Jan 16 18:05:36.689000 audit: BPF prog-id=225 op=LOAD Jan 16 18:05:36.689000 audit[5081]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000206180 a2=98 a3=0 items=0 ppid=5070 pid=5081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665633462396334336234356534633832393736343430303131666366 Jan 16 18:05:36.689000 audit: BPF prog-id=225 op=UNLOAD Jan 16 18:05:36.689000 audit[5081]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5070 pid=5081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665633462396334336234356534633832393736343430303131666366 Jan 16 18:05:36.690000 audit: BPF prog-id=226 op=LOAD Jan 16 18:05:36.690000 audit[5081]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40002063e8 a2=98 a3=0 items=0 ppid=5070 pid=5081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.690000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665633462396334336234356534633832393736343430303131666366 Jan 16 18:05:36.692000 audit: BPF prog-id=227 op=LOAD Jan 16 18:05:36.692000 audit[5081]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000206168 a2=98 a3=0 items=0 ppid=5070 pid=5081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665633462396334336234356534633832393736343430303131666366 Jan 16 18:05:36.693000 audit: BPF prog-id=227 op=UNLOAD Jan 16 18:05:36.693000 audit[5081]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5070 pid=5081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665633462396334336234356534633832393736343430303131666366 Jan 16 18:05:36.693000 audit: BPF prog-id=226 op=UNLOAD Jan 16 18:05:36.693000 audit[5081]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5070 pid=5081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665633462396334336234356534633832393736343430303131666366 Jan 16 18:05:36.694000 audit: BPF prog-id=228 op=LOAD Jan 16 18:05:36.694000 audit[5081]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000206648 a2=98 a3=0 items=0 ppid=5070 pid=5081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665633462396334336234356534633832393736343430303131666366 Jan 16 18:05:36.714974 containerd[1954]: time="2026-01-16T18:05:36.714903794Z" level=info msg="Container cfbed579dd93eea3cca2b484cd8408faf6d6d2649b408791c4a870fc46e20fa0: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:05:36.717700 systemd[1]: Started cri-containerd-757ae4fa23537f92de4bcc9bd8f3f1731806474329819f1a7d67a58f55aa9fa9.scope - libcontainer container 757ae4fa23537f92de4bcc9bd8f3f1731806474329819f1a7d67a58f55aa9fa9. 
Jan 16 18:05:36.730912 containerd[1954]: time="2026-01-16T18:05:36.730840514Z" level=info msg="CreateContainer within sandbox \"5a5079a1dc385481cfd0616eb2d30ab4358009195f3ea3a2a93ed2cefeac8fcc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cfbed579dd93eea3cca2b484cd8408faf6d6d2649b408791c4a870fc46e20fa0\"" Jan 16 18:05:36.732882 containerd[1954]: time="2026-01-16T18:05:36.732821666Z" level=info msg="StartContainer for \"cfbed579dd93eea3cca2b484cd8408faf6d6d2649b408791c4a870fc46e20fa0\"" Jan 16 18:05:36.735197 containerd[1954]: time="2026-01-16T18:05:36.734897990Z" level=info msg="connecting to shim cfbed579dd93eea3cca2b484cd8408faf6d6d2649b408791c4a870fc46e20fa0" address="unix:///run/containerd/s/f8313e4737ff710114cae581fc3ffe237bce46485d190ed499eea0b6bf7a449b" protocol=ttrpc version=3 Jan 16 18:05:36.735908 containerd[1954]: time="2026-01-16T18:05:36.735778610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54df9c5477-js6cv,Uid:20a749f5-b28f-4523-9135-e8877a359519,Namespace:calico-apiserver,Attempt:0,}" Jan 16 18:05:36.817000 audit: BPF prog-id=229 op=LOAD Jan 16 18:05:36.820000 audit: BPF prog-id=230 op=LOAD Jan 16 18:05:36.820000 audit[5135]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=5122 pid=5135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735376165346661323335333766393264653462636339626438663366 Jan 16 18:05:36.820000 audit: BPF prog-id=230 op=UNLOAD Jan 16 18:05:36.820000 audit[5135]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5122 pid=5135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735376165346661323335333766393264653462636339626438663366 Jan 16 18:05:36.820000 audit: BPF prog-id=231 op=LOAD Jan 16 18:05:36.820000 audit[5135]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=5122 pid=5135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735376165346661323335333766393264653462636339626438663366 Jan 16 18:05:36.820000 audit: BPF prog-id=232 op=LOAD Jan 16 18:05:36.820000 audit[5135]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=5122 pid=5135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
16 18:05:36.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735376165346661323335333766393264653462636339626438663366 Jan 16 18:05:36.822000 audit: BPF prog-id=232 op=UNLOAD Jan 16 18:05:36.822000 audit[5135]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5122 pid=5135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.822000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735376165346661323335333766393264653462636339626438663366 Jan 16 18:05:36.822000 audit: BPF prog-id=231 op=UNLOAD Jan 16 18:05:36.822000 audit[5135]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5122 pid=5135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.822000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735376165346661323335333766393264653462636339626438663366 Jan 16 18:05:36.822000 audit: BPF prog-id=233 op=LOAD Jan 16 18:05:36.822000 audit[5135]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=5122 pid=5135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.822000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735376165346661323335333766393264653462636339626438663366 Jan 16 18:05:36.846538 systemd[1]: Started cri-containerd-cfbed579dd93eea3cca2b484cd8408faf6d6d2649b408791c4a870fc46e20fa0.scope - libcontainer container cfbed579dd93eea3cca2b484cd8408faf6d6d2649b408791c4a870fc46e20fa0. 
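The goldmane pull that follows fails with "fetch failed after status: 404 Not Found" from ghcr.io, which containerd reports as a resolve failure and kubelet turns into ErrImagePull and, on retry, ImagePullBackOff (the same pattern already visible for calico/kube-controllers above). Resolving a tag is essentially a manifest request against the registry's /v2/<name>/manifests/<reference> endpoint; the standalone probe below is a rough, hypothetical reproduction of that step. It deliberately skips the anonymous-token handshake ghcr.io normally requires, so a real run may see 401/403 rather than the 404 containerd logged.

// manifestprobe.go - rough sketch of the manifest lookup behind image resolution;
// a missing tag surfaces as HTTP 404, which is what this log shows.
// Hypothetical check only: no registry token handshake is performed here.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	const url = "https://ghcr.io/v2/flatcar/calico/goldmane/manifests/v3.30.4"
	req, err := http.NewRequest(http.MethodHead, url, nil)
	if err != nil {
		panic(err)
	}
	// Manifest media types commonly advertised when resolving a reference.
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.list.v2+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // e.g. "404 Not Found" for a tag that does not exist
}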
Jan 16 18:05:36.903474 containerd[1954]: time="2026-01-16T18:05:36.903196551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-t9x2h,Uid:3932bc10-72fd-4993-add3-bcb26a36ba2d,Namespace:calico-system,Attempt:0,} returns sandbox id \"fec4b9c43b45e4c82976440011fcfe4518599a6dbe8ae4f839ed800dd2e61528\"" Jan 16 18:05:36.910937 containerd[1954]: time="2026-01-16T18:05:36.910868715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 18:05:36.915000 audit: BPF prog-id=234 op=LOAD Jan 16 18:05:36.919000 audit: BPF prog-id=235 op=LOAD Jan 16 18:05:36.919000 audit[5148]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5020 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366626564353739646439336565613363636132623438346364383430 Jan 16 18:05:36.921000 audit: BPF prog-id=235 op=UNLOAD Jan 16 18:05:36.921000 audit[5148]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5020 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366626564353739646439336565613363636132623438346364383430 Jan 16 18:05:36.921000 audit: BPF prog-id=236 op=LOAD Jan 16 18:05:36.921000 audit[5148]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5020 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366626564353739646439336565613363636132623438346364383430 Jan 16 18:05:36.921000 audit: BPF prog-id=237 op=LOAD Jan 16 18:05:36.921000 audit[5148]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5020 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366626564353739646439336565613363636132623438346364383430 Jan 16 18:05:36.922000 audit: BPF prog-id=237 op=UNLOAD Jan 16 18:05:36.922000 audit[5148]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5020 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366626564353739646439336565613363636132623438346364383430 Jan 16 18:05:36.923000 audit: BPF prog-id=236 op=UNLOAD Jan 16 18:05:36.923000 audit[5148]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5020 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366626564353739646439336565613363636132623438346364383430 Jan 16 18:05:36.924000 audit: BPF prog-id=238 op=LOAD Jan 16 18:05:36.924000 audit[5148]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5020 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:36.924000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366626564353739646439336565613363636132623438346364383430 Jan 16 18:05:36.934955 containerd[1954]: time="2026-01-16T18:05:36.934087887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88slq,Uid:74813863-8ca6-40d9-bd92-5b37511fc2e0,Namespace:calico-system,Attempt:0,} returns sandbox id \"757ae4fa23537f92de4bcc9bd8f3f1731806474329819f1a7d67a58f55aa9fa9\"" Jan 16 18:05:37.005725 containerd[1954]: time="2026-01-16T18:05:37.005524067Z" level=info msg="StartContainer for \"cfbed579dd93eea3cca2b484cd8408faf6d6d2649b408791c4a870fc46e20fa0\" returns successfully" Jan 16 18:05:37.106152 kubelet[3428]: E0116 18:05:37.105100 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76db965947-cmgcq" podUID="d70fefd9-029e-4ddf-8559-5e71028d4fd0" Jan 16 18:05:37.173544 kubelet[3428]: I0116 18:05:37.173328 3428 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cbrj8" podStartSLOduration=53.17327904 podStartE2EDuration="53.17327904s" podCreationTimestamp="2026-01-16 18:04:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 18:05:37.169762812 +0000 UTC m=+57.727461732" watchObservedRunningTime="2026-01-16 18:05:37.17327904 +0000 UTC m=+57.730977936" Jan 16 18:05:37.197497 containerd[1954]: time="2026-01-16T18:05:37.197010252Z" level=info msg="fetch 
failed after status: 404 Not Found" host=ghcr.io Jan 16 18:05:37.197914 systemd-networkd[1870]: cali844c47ad770: Link UP Jan 16 18:05:37.201643 systemd-networkd[1870]: cali844c47ad770: Gained carrier Jan 16 18:05:37.202491 containerd[1954]: time="2026-01-16T18:05:37.202246020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:37.205578 containerd[1954]: time="2026-01-16T18:05:37.202709520Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 18:05:37.207281 kubelet[3428]: E0116 18:05:37.207027 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 18:05:37.207281 kubelet[3428]: E0116 18:05:37.207132 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 18:05:37.207858 kubelet[3428]: E0116 18:05:37.207449 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kk276,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-t9x2h_calico-system(3932bc10-72fd-4993-add3-bcb26a36ba2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:37.209087 containerd[1954]: time="2026-01-16T18:05:37.208382904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 18:05:37.211698 kubelet[3428]: E0116 18:05:37.211411 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t9x2h" podUID="3932bc10-72fd-4993-add3-bcb26a36ba2d" Jan 16 18:05:37.240000 audit[5223]: NETFILTER_CFG table=filter:135 family=2 entries=20 op=nft_register_rule pid=5223 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:37.240000 audit[5223]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffeedf4e30 a2=0 a3=1 items=0 ppid=3574 pid=5223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:37.240000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:37.254000 audit[5223]: NETFILTER_CFG table=nat:136 family=2 entries=14 op=nft_register_rule pid=5223 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:37.254000 audit[5223]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffeedf4e30 a2=0 a3=1 items=0 ppid=3574 pid=5223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:37.254000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:37.257398 containerd[1954]: 2026-01-16 18:05:36.962 [INFO][5156] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--js6cv-eth0 calico-apiserver-54df9c5477- calico-apiserver 20a749f5-b28f-4523-9135-e8877a359519 827 0 2026-01-16 18:04:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver 
k8s-app:calico-apiserver pod-template-hash:54df9c5477 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-22-249 calico-apiserver-54df9c5477-js6cv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali844c47ad770 [] [] }} ContainerID="8d23637c42feac5808f51d3abc23d77ac23f5f7814bca937d0c92764837c8d04" Namespace="calico-apiserver" Pod="calico-apiserver-54df9c5477-js6cv" WorkloadEndpoint="ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--js6cv-" Jan 16 18:05:37.257398 containerd[1954]: 2026-01-16 18:05:36.962 [INFO][5156] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8d23637c42feac5808f51d3abc23d77ac23f5f7814bca937d0c92764837c8d04" Namespace="calico-apiserver" Pod="calico-apiserver-54df9c5477-js6cv" WorkloadEndpoint="ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--js6cv-eth0" Jan 16 18:05:37.257398 containerd[1954]: 2026-01-16 18:05:37.052 [INFO][5208] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d23637c42feac5808f51d3abc23d77ac23f5f7814bca937d0c92764837c8d04" HandleID="k8s-pod-network.8d23637c42feac5808f51d3abc23d77ac23f5f7814bca937d0c92764837c8d04" Workload="ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--js6cv-eth0" Jan 16 18:05:37.257398 containerd[1954]: 2026-01-16 18:05:37.053 [INFO][5208] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8d23637c42feac5808f51d3abc23d77ac23f5f7814bca937d0c92764837c8d04" HandleID="k8s-pod-network.8d23637c42feac5808f51d3abc23d77ac23f5f7814bca937d0c92764837c8d04" Workload="ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--js6cv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103960), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-22-249", "pod":"calico-apiserver-54df9c5477-js6cv", "timestamp":"2026-01-16 18:05:37.052833336 +0000 UTC"}, Hostname:"ip-172-31-22-249", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 18:05:37.257398 containerd[1954]: 2026-01-16 18:05:37.053 [INFO][5208] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 18:05:37.257398 containerd[1954]: 2026-01-16 18:05:37.053 [INFO][5208] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 18:05:37.257398 containerd[1954]: 2026-01-16 18:05:37.053 [INFO][5208] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-249' Jan 16 18:05:37.257398 containerd[1954]: 2026-01-16 18:05:37.075 [INFO][5208] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8d23637c42feac5808f51d3abc23d77ac23f5f7814bca937d0c92764837c8d04" host="ip-172-31-22-249" Jan 16 18:05:37.257398 containerd[1954]: 2026-01-16 18:05:37.082 [INFO][5208] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-249" Jan 16 18:05:37.257398 containerd[1954]: 2026-01-16 18:05:37.095 [INFO][5208] ipam/ipam.go 511: Trying affinity for 192.168.54.0/26 host="ip-172-31-22-249" Jan 16 18:05:37.257398 containerd[1954]: 2026-01-16 18:05:37.103 [INFO][5208] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.0/26 host="ip-172-31-22-249" Jan 16 18:05:37.257398 containerd[1954]: 2026-01-16 18:05:37.126 [INFO][5208] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="ip-172-31-22-249" Jan 16 18:05:37.257398 containerd[1954]: 2026-01-16 18:05:37.127 [INFO][5208] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.8d23637c42feac5808f51d3abc23d77ac23f5f7814bca937d0c92764837c8d04" host="ip-172-31-22-249" Jan 16 18:05:37.257398 containerd[1954]: 2026-01-16 18:05:37.148 [INFO][5208] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8d23637c42feac5808f51d3abc23d77ac23f5f7814bca937d0c92764837c8d04 Jan 16 18:05:37.257398 containerd[1954]: 2026-01-16 18:05:37.160 [INFO][5208] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.8d23637c42feac5808f51d3abc23d77ac23f5f7814bca937d0c92764837c8d04" host="ip-172-31-22-249" Jan 16 18:05:37.257398 containerd[1954]: 2026-01-16 18:05:37.179 [INFO][5208] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.6/26] block=192.168.54.0/26 handle="k8s-pod-network.8d23637c42feac5808f51d3abc23d77ac23f5f7814bca937d0c92764837c8d04" host="ip-172-31-22-249" Jan 16 18:05:37.257398 containerd[1954]: 2026-01-16 18:05:37.179 [INFO][5208] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.6/26] handle="k8s-pod-network.8d23637c42feac5808f51d3abc23d77ac23f5f7814bca937d0c92764837c8d04" host="ip-172-31-22-249" Jan 16 18:05:37.257398 containerd[1954]: 2026-01-16 18:05:37.180 [INFO][5208] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
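The Calico IPAM entries above follow a fixed sequence: take the host-wide IPAM lock, look up this node's block affinities, confirm the affinity for 192.168.54.0/26, claim one address (192.168.54.6) from that block, then release the lock; shortly afterwards the same block hands out 192.168.54.7 to a coredns pod. As a rough illustration of the block arithmetic only (not Calico's implementation), Python's standard ipaddress module shows what a /26 per-node block means:

    import ipaddress

    # The per-node block the log shows affinity for, and the address claimed from it.
    block = ipaddress.ip_network("192.168.54.0/26")
    claimed = ipaddress.ip_address("192.168.54.6")

    print(block.num_addresses)  # 64 -> a /26 block covers 192.168.54.0 through 192.168.54.63
    print(claimed in block)     # True -> the claimed address sits inside the node's affine block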
Jan 16 18:05:37.257398 containerd[1954]: 2026-01-16 18:05:37.180 [INFO][5208] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.6/26] IPv6=[] ContainerID="8d23637c42feac5808f51d3abc23d77ac23f5f7814bca937d0c92764837c8d04" HandleID="k8s-pod-network.8d23637c42feac5808f51d3abc23d77ac23f5f7814bca937d0c92764837c8d04" Workload="ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--js6cv-eth0" Jan 16 18:05:37.259694 containerd[1954]: 2026-01-16 18:05:37.185 [INFO][5156] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8d23637c42feac5808f51d3abc23d77ac23f5f7814bca937d0c92764837c8d04" Namespace="calico-apiserver" Pod="calico-apiserver-54df9c5477-js6cv" WorkloadEndpoint="ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--js6cv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--js6cv-eth0", GenerateName:"calico-apiserver-54df9c5477-", Namespace:"calico-apiserver", SelfLink:"", UID:"20a749f5-b28f-4523-9135-e8877a359519", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 4, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54df9c5477", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-249", ContainerID:"", Pod:"calico-apiserver-54df9c5477-js6cv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali844c47ad770", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:05:37.259694 containerd[1954]: 2026-01-16 18:05:37.186 [INFO][5156] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.6/32] ContainerID="8d23637c42feac5808f51d3abc23d77ac23f5f7814bca937d0c92764837c8d04" Namespace="calico-apiserver" Pod="calico-apiserver-54df9c5477-js6cv" WorkloadEndpoint="ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--js6cv-eth0" Jan 16 18:05:37.259694 containerd[1954]: 2026-01-16 18:05:37.186 [INFO][5156] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali844c47ad770 ContainerID="8d23637c42feac5808f51d3abc23d77ac23f5f7814bca937d0c92764837c8d04" Namespace="calico-apiserver" Pod="calico-apiserver-54df9c5477-js6cv" WorkloadEndpoint="ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--js6cv-eth0" Jan 16 18:05:37.259694 containerd[1954]: 2026-01-16 18:05:37.210 [INFO][5156] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d23637c42feac5808f51d3abc23d77ac23f5f7814bca937d0c92764837c8d04" Namespace="calico-apiserver" Pod="calico-apiserver-54df9c5477-js6cv" WorkloadEndpoint="ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--js6cv-eth0" Jan 16 18:05:37.259694 containerd[1954]: 2026-01-16 18:05:37.212 [INFO][5156] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="8d23637c42feac5808f51d3abc23d77ac23f5f7814bca937d0c92764837c8d04" Namespace="calico-apiserver" Pod="calico-apiserver-54df9c5477-js6cv" WorkloadEndpoint="ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--js6cv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--js6cv-eth0", GenerateName:"calico-apiserver-54df9c5477-", Namespace:"calico-apiserver", SelfLink:"", UID:"20a749f5-b28f-4523-9135-e8877a359519", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 4, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54df9c5477", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-249", ContainerID:"8d23637c42feac5808f51d3abc23d77ac23f5f7814bca937d0c92764837c8d04", Pod:"calico-apiserver-54df9c5477-js6cv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali844c47ad770", MAC:"c2:94:98:d3:e0:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:05:37.259694 containerd[1954]: 2026-01-16 18:05:37.251 [INFO][5156] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8d23637c42feac5808f51d3abc23d77ac23f5f7814bca937d0c92764837c8d04" Namespace="calico-apiserver" Pod="calico-apiserver-54df9c5477-js6cv" WorkloadEndpoint="ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--js6cv-eth0" Jan 16 18:05:37.321000 audit[5230]: NETFILTER_CFG table=filter:137 family=2 entries=68 op=nft_register_chain pid=5230 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:05:37.321000 audit[5230]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=34608 a0=3 a1=fffffe718030 a2=0 a3=ffffb1862fa8 items=0 ppid=4594 pid=5230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:37.321000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:05:37.331768 containerd[1954]: time="2026-01-16T18:05:37.331676497Z" level=info msg="connecting to shim 8d23637c42feac5808f51d3abc23d77ac23f5f7814bca937d0c92764837c8d04" address="unix:///run/containerd/s/a4cb28f0aed86fa7fe89ebcd0304172a3f3074a4ec7ec09e02458c223188fbf6" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:05:37.383485 systemd[1]: Started cri-containerd-8d23637c42feac5808f51d3abc23d77ac23f5f7814bca937d0c92764837c8d04.scope - libcontainer container 8d23637c42feac5808f51d3abc23d77ac23f5f7814bca937d0c92764837c8d04. 
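The audit PROCTITLE fields that accompany the runc records in this log are the process command line, hex-encoded with NUL bytes separating the arguments. A small helper (purely illustrative; the function name and the shortened input are this note's own, not existing tooling) turns them back into readable argv lists:

    # Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
    def decode_proctitle(hex_str: str) -> list[str]:
        raw = bytes.fromhex(hex_str)
        return [arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg]

    # A shortened prefix of the runc proctitle values logged above:
    print(decode_proctitle(
        "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"))
    # -> ['runc', '--root', '/run/containerd/runc/k8s.io']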
Jan 16 18:05:37.413000 audit: BPF prog-id=239 op=LOAD Jan 16 18:05:37.415000 audit: BPF prog-id=240 op=LOAD Jan 16 18:05:37.415000 audit[5249]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=5239 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:37.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864323336333763343266656163353830386635316433616263323364 Jan 16 18:05:37.416000 audit: BPF prog-id=240 op=UNLOAD Jan 16 18:05:37.416000 audit[5249]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5239 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:37.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864323336333763343266656163353830386635316433616263323364 Jan 16 18:05:37.416000 audit: BPF prog-id=241 op=LOAD Jan 16 18:05:37.416000 audit[5249]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=5239 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:37.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864323336333763343266656163353830386635316433616263323364 Jan 16 18:05:37.418000 audit: BPF prog-id=242 op=LOAD Jan 16 18:05:37.418000 audit[5249]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=5239 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:37.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864323336333763343266656163353830386635316433616263323364 Jan 16 18:05:37.418000 audit: BPF prog-id=242 op=UNLOAD Jan 16 18:05:37.418000 audit[5249]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5239 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:37.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864323336333763343266656163353830386635316433616263323364 Jan 16 18:05:37.419000 audit: BPF prog-id=241 op=UNLOAD Jan 16 18:05:37.419000 audit[5249]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5239 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:37.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864323336333763343266656163353830386635316433616263323364 Jan 16 18:05:37.420000 audit: BPF prog-id=243 op=LOAD Jan 16 18:05:37.420000 audit[5249]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=5239 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:37.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864323336333763343266656163353830386635316433616263323364 Jan 16 18:05:37.490209 containerd[1954]: time="2026-01-16T18:05:37.490138286Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:05:37.492511 containerd[1954]: time="2026-01-16T18:05:37.492413138Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 18:05:37.492642 containerd[1954]: time="2026-01-16T18:05:37.492564398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:37.492969 kubelet[3428]: E0116 18:05:37.492884 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:05:37.493056 kubelet[3428]: E0116 18:05:37.492974 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:05:37.494259 kubelet[3428]: E0116 18:05:37.494154 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4p5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-88slq_calico-system(74813863-8ca6-40d9-bd92-5b37511fc2e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:37.497802 containerd[1954]: time="2026-01-16T18:05:37.497733866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 18:05:37.587615 containerd[1954]: time="2026-01-16T18:05:37.587517614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54df9c5477-js6cv,Uid:20a749f5-b28f-4523-9135-e8877a359519,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8d23637c42feac5808f51d3abc23d77ac23f5f7814bca937d0c92764837c8d04\"" Jan 16 18:05:37.736890 containerd[1954]: time="2026-01-16T18:05:37.736745367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z6xnx,Uid:73afd704-2c12-49ff-a165-c96fb49c636e,Namespace:kube-system,Attempt:0,}" Jan 16 18:05:37.738034 systemd-networkd[1870]: cali9124d392eb1: Gained IPv6LL Jan 16 18:05:37.769157 containerd[1954]: time="2026-01-16T18:05:37.768080691Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:05:37.771666 containerd[1954]: time="2026-01-16T18:05:37.771599211Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 18:05:37.772809 containerd[1954]: time="2026-01-16T18:05:37.771669303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active 
requests=0, bytes read=0" Jan 16 18:05:37.774554 kubelet[3428]: E0116 18:05:37.774411 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:05:37.776394 kubelet[3428]: E0116 18:05:37.774569 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:05:37.776394 kubelet[3428]: E0116 18:05:37.774837 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4p5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-88slq_calico-system(74813863-8ca6-40d9-bd92-5b37511fc2e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:37.776650 containerd[1954]: time="2026-01-16T18:05:37.775852059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:05:37.777656 kubelet[3428]: E0116 18:05:37.777312 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-88slq" podUID="74813863-8ca6-40d9-bd92-5b37511fc2e0" Jan 16 18:05:37.928363 systemd-networkd[1870]: cali7885ee761de: Gained IPv6LL Jan 16 18:05:38.046824 systemd-networkd[1870]: cali6f8b42fd802: Link UP Jan 16 18:05:38.051684 systemd-networkd[1870]: cali6f8b42fd802: Gained carrier Jan 16 18:05:38.065621 containerd[1954]: time="2026-01-16T18:05:38.065571133Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:05:38.071642 containerd[1954]: time="2026-01-16T18:05:38.070772401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:38.072854 containerd[1954]: time="2026-01-16T18:05:38.071296789Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:05:38.074286 kubelet[3428]: E0116 18:05:38.074164 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:05:38.074286 kubelet[3428]: E0116 18:05:38.074247 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:05:38.075414 kubelet[3428]: E0116 18:05:38.074475 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xkph5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54df9c5477-js6cv_calico-apiserver(20a749f5-b28f-4523-9135-e8877a359519): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:38.076022 kubelet[3428]: E0116 18:05:38.075770 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54df9c5477-js6cv" podUID="20a749f5-b28f-4523-9135-e8877a359519" Jan 16 18:05:38.077207 containerd[1954]: 2026-01-16 18:05:37.871 [INFO][5280] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--249-k8s-coredns--668d6bf9bc--z6xnx-eth0 coredns-668d6bf9bc- kube-system 73afd704-2c12-49ff-a165-c96fb49c636e 825 0 2026-01-16 18:04:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-22-249 coredns-668d6bf9bc-z6xnx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6f8b42fd802 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766" Namespace="kube-system" Pod="coredns-668d6bf9bc-z6xnx" WorkloadEndpoint="ip--172--31--22--249-k8s-coredns--668d6bf9bc--z6xnx-" Jan 16 18:05:38.077207 containerd[1954]: 2026-01-16 18:05:37.872 [INFO][5280] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766" Namespace="kube-system" Pod="coredns-668d6bf9bc-z6xnx" WorkloadEndpoint="ip--172--31--22--249-k8s-coredns--668d6bf9bc--z6xnx-eth0" Jan 16 18:05:38.077207 containerd[1954]: 2026-01-16 18:05:37.964 [INFO][5292] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766" HandleID="k8s-pod-network.046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766" Workload="ip--172--31--22--249-k8s-coredns--668d6bf9bc--z6xnx-eth0" Jan 16 18:05:38.077207 containerd[1954]: 2026-01-16 18:05:37.966 [INFO][5292] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766" 
HandleID="k8s-pod-network.046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766" Workload="ip--172--31--22--249-k8s-coredns--668d6bf9bc--z6xnx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000331710), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-249", "pod":"coredns-668d6bf9bc-z6xnx", "timestamp":"2026-01-16 18:05:37.964433704 +0000 UTC"}, Hostname:"ip-172-31-22-249", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 18:05:38.077207 containerd[1954]: 2026-01-16 18:05:37.966 [INFO][5292] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 18:05:38.077207 containerd[1954]: 2026-01-16 18:05:37.966 [INFO][5292] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 18:05:38.077207 containerd[1954]: 2026-01-16 18:05:37.966 [INFO][5292] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-249' Jan 16 18:05:38.077207 containerd[1954]: 2026-01-16 18:05:37.983 [INFO][5292] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766" host="ip-172-31-22-249" Jan 16 18:05:38.077207 containerd[1954]: 2026-01-16 18:05:37.993 [INFO][5292] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-249" Jan 16 18:05:38.077207 containerd[1954]: 2026-01-16 18:05:38.002 [INFO][5292] ipam/ipam.go 511: Trying affinity for 192.168.54.0/26 host="ip-172-31-22-249" Jan 16 18:05:38.077207 containerd[1954]: 2026-01-16 18:05:38.008 [INFO][5292] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.0/26 host="ip-172-31-22-249" Jan 16 18:05:38.077207 containerd[1954]: 2026-01-16 18:05:38.012 [INFO][5292] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="ip-172-31-22-249" Jan 16 18:05:38.077207 containerd[1954]: 2026-01-16 18:05:38.013 [INFO][5292] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766" host="ip-172-31-22-249" Jan 16 18:05:38.077207 containerd[1954]: 2026-01-16 18:05:38.015 [INFO][5292] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766 Jan 16 18:05:38.077207 containerd[1954]: 2026-01-16 18:05:38.023 [INFO][5292] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766" host="ip-172-31-22-249" Jan 16 18:05:38.077207 containerd[1954]: 2026-01-16 18:05:38.036 [INFO][5292] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.7/26] block=192.168.54.0/26 handle="k8s-pod-network.046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766" host="ip-172-31-22-249" Jan 16 18:05:38.077207 containerd[1954]: 2026-01-16 18:05:38.036 [INFO][5292] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.7/26] handle="k8s-pod-network.046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766" host="ip-172-31-22-249" Jan 16 18:05:38.077207 containerd[1954]: 2026-01-16 18:05:38.036 [INFO][5292] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 16 18:05:38.077207 containerd[1954]: 2026-01-16 18:05:38.036 [INFO][5292] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.7/26] IPv6=[] ContainerID="046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766" HandleID="k8s-pod-network.046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766" Workload="ip--172--31--22--249-k8s-coredns--668d6bf9bc--z6xnx-eth0" Jan 16 18:05:38.081538 containerd[1954]: 2026-01-16 18:05:38.040 [INFO][5280] cni-plugin/k8s.go 418: Populated endpoint ContainerID="046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766" Namespace="kube-system" Pod="coredns-668d6bf9bc-z6xnx" WorkloadEndpoint="ip--172--31--22--249-k8s-coredns--668d6bf9bc--z6xnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--249-k8s-coredns--668d6bf9bc--z6xnx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"73afd704-2c12-49ff-a165-c96fb49c636e", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 4, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-249", ContainerID:"", Pod:"coredns-668d6bf9bc-z6xnx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f8b42fd802", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:05:38.081538 containerd[1954]: 2026-01-16 18:05:38.041 [INFO][5280] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.7/32] ContainerID="046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766" Namespace="kube-system" Pod="coredns-668d6bf9bc-z6xnx" WorkloadEndpoint="ip--172--31--22--249-k8s-coredns--668d6bf9bc--z6xnx-eth0" Jan 16 18:05:38.081538 containerd[1954]: 2026-01-16 18:05:38.041 [INFO][5280] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f8b42fd802 ContainerID="046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766" Namespace="kube-system" Pod="coredns-668d6bf9bc-z6xnx" WorkloadEndpoint="ip--172--31--22--249-k8s-coredns--668d6bf9bc--z6xnx-eth0" Jan 16 18:05:38.081538 containerd[1954]: 2026-01-16 18:05:38.047 [INFO][5280] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766" Namespace="kube-system" Pod="coredns-668d6bf9bc-z6xnx" 
WorkloadEndpoint="ip--172--31--22--249-k8s-coredns--668d6bf9bc--z6xnx-eth0" Jan 16 18:05:38.081538 containerd[1954]: 2026-01-16 18:05:38.048 [INFO][5280] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766" Namespace="kube-system" Pod="coredns-668d6bf9bc-z6xnx" WorkloadEndpoint="ip--172--31--22--249-k8s-coredns--668d6bf9bc--z6xnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--249-k8s-coredns--668d6bf9bc--z6xnx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"73afd704-2c12-49ff-a165-c96fb49c636e", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 4, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-249", ContainerID:"046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766", Pod:"coredns-668d6bf9bc-z6xnx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f8b42fd802", MAC:"76:73:ed:d6:56:f5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:05:38.081538 containerd[1954]: 2026-01-16 18:05:38.069 [INFO][5280] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766" Namespace="kube-system" Pod="coredns-668d6bf9bc-z6xnx" WorkloadEndpoint="ip--172--31--22--249-k8s-coredns--668d6bf9bc--z6xnx-eth0" Jan 16 18:05:38.114865 kubelet[3428]: E0116 18:05:38.114675 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54df9c5477-js6cv" podUID="20a749f5-b28f-4523-9135-e8877a359519" Jan 16 18:05:38.124293 systemd-networkd[1870]: calib500d6e3d92: Gained IPv6LL Jan 16 18:05:38.131991 kubelet[3428]: E0116 18:05:38.128672 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-88slq" podUID="74813863-8ca6-40d9-bd92-5b37511fc2e0" Jan 16 18:05:38.133477 kubelet[3428]: E0116 18:05:38.133287 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t9x2h" podUID="3932bc10-72fd-4993-add3-bcb26a36ba2d" Jan 16 18:05:38.155011 containerd[1954]: time="2026-01-16T18:05:38.154929949Z" level=info msg="connecting to shim 046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766" address="unix:///run/containerd/s/d75229c96949d81a08cf01ad148c8c96f78d2b332e3672a72cef761904a7cec5" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:05:38.209000 audit[5332]: NETFILTER_CFG table=filter:138 family=2 entries=44 op=nft_register_chain pid=5332 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:05:38.209000 audit[5332]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21500 a0=3 a1=fffffaf89600 a2=0 a3=ffffa19effa8 items=0 ppid=4594 pid=5332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:38.209000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:05:38.252794 systemd[1]: Started cri-containerd-046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766.scope - libcontainer container 046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766. 
Jan 16 18:05:38.294000 audit[5349]: NETFILTER_CFG table=filter:139 family=2 entries=20 op=nft_register_rule pid=5349 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:38.294000 audit[5349]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffed34c1e0 a2=0 a3=1 items=0 ppid=3574 pid=5349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:38.294000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:38.305000 audit[5349]: NETFILTER_CFG table=nat:140 family=2 entries=14 op=nft_register_rule pid=5349 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:38.305000 audit[5349]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffed34c1e0 a2=0 a3=1 items=0 ppid=3574 pid=5349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:38.305000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:38.318000 audit: BPF prog-id=244 op=LOAD Jan 16 18:05:38.320000 audit: BPF prog-id=245 op=LOAD Jan 16 18:05:38.320000 audit[5328]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=5316 pid=5328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:38.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034366631336336396136653535376661643431633739633332346338 Jan 16 18:05:38.320000 audit: BPF prog-id=245 op=UNLOAD Jan 16 18:05:38.320000 audit[5328]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5316 pid=5328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:38.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034366631336336396136653535376661643431633739633332346338 Jan 16 18:05:38.320000 audit: BPF prog-id=246 op=LOAD Jan 16 18:05:38.320000 audit[5328]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=5316 pid=5328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:38.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034366631336336396136653535376661643431633739633332346338 Jan 16 18:05:38.320000 audit: BPF prog-id=247 op=LOAD Jan 16 18:05:38.320000 audit[5328]: SYSCALL 
arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=5316 pid=5328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:38.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034366631336336396136653535376661643431633739633332346338 Jan 16 18:05:38.320000 audit: BPF prog-id=247 op=UNLOAD Jan 16 18:05:38.320000 audit[5328]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5316 pid=5328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:38.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034366631336336396136653535376661643431633739633332346338 Jan 16 18:05:38.321000 audit: BPF prog-id=246 op=UNLOAD Jan 16 18:05:38.321000 audit[5328]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5316 pid=5328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:38.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034366631336336396136653535376661643431633739633332346338 Jan 16 18:05:38.321000 audit: BPF prog-id=248 op=LOAD Jan 16 18:05:38.321000 audit[5328]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=5316 pid=5328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:38.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034366631336336396136653535376661643431633739633332346338 Jan 16 18:05:38.367000 audit[5351]: NETFILTER_CFG table=filter:141 family=2 entries=20 op=nft_register_rule pid=5351 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:38.367000 audit[5351]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc00b01a0 a2=0 a3=1 items=0 ppid=3574 pid=5351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:38.367000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:38.372000 audit[5351]: NETFILTER_CFG table=nat:142 family=2 entries=14 op=nft_register_rule pid=5351 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:38.372000 audit[5351]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=3468 a0=3 a1=ffffc00b01a0 a2=0 a3=1 items=0 ppid=3574 pid=5351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:38.372000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:38.401476 containerd[1954]: time="2026-01-16T18:05:38.401414186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z6xnx,Uid:73afd704-2c12-49ff-a165-c96fb49c636e,Namespace:kube-system,Attempt:0,} returns sandbox id \"046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766\"" Jan 16 18:05:38.407795 containerd[1954]: time="2026-01-16T18:05:38.407639534Z" level=info msg="CreateContainer within sandbox \"046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 18:05:38.429479 containerd[1954]: time="2026-01-16T18:05:38.429414939Z" level=info msg="Container 2831425c7d186bf8c7d06614d8ddc873bcbe6b71bc7e275d9d94f70f3d2c38a5: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:05:38.447145 containerd[1954]: time="2026-01-16T18:05:38.447060951Z" level=info msg="CreateContainer within sandbox \"046f13c69a6e557fad41c79c324c80ad5e9a0793ed0038f4f30e4a98f2e98766\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2831425c7d186bf8c7d06614d8ddc873bcbe6b71bc7e275d9d94f70f3d2c38a5\"" Jan 16 18:05:38.448660 containerd[1954]: time="2026-01-16T18:05:38.448600383Z" level=info msg="StartContainer for \"2831425c7d186bf8c7d06614d8ddc873bcbe6b71bc7e275d9d94f70f3d2c38a5\"" Jan 16 18:05:38.450892 containerd[1954]: time="2026-01-16T18:05:38.450808587Z" level=info msg="connecting to shim 2831425c7d186bf8c7d06614d8ddc873bcbe6b71bc7e275d9d94f70f3d2c38a5" address="unix:///run/containerd/s/d75229c96949d81a08cf01ad148c8c96f78d2b332e3672a72cef761904a7cec5" protocol=ttrpc version=3 Jan 16 18:05:38.491481 systemd[1]: Started cri-containerd-2831425c7d186bf8c7d06614d8ddc873bcbe6b71bc7e275d9d94f70f3d2c38a5.scope - libcontainer container 2831425c7d186bf8c7d06614d8ddc873bcbe6b71bc7e275d9d94f70f3d2c38a5. 
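The audit SYSCALL records in this section all carry arch=c00000b7, the audit identifier for 64-bit little-endian arm64, so the syscall numbers follow the arm64 asm-generic table: 280 appears with the BPF prog-id op=LOAD lines, 57 with the op=UNLOAD lines, and 211 with the iptables-restore NETFILTER_CFG records. The lookup below is this note's reading of the arm64 syscall list (limited to the numbers that actually appear here), not output from the log:

    # Syscall numbers appearing in the audit records above, per the arm64 (asm-generic) table.
    # arch=c00000b7 identifies AUDIT_ARCH_AARCH64 (64-bit, little-endian).
    AARCH64_SYSCALLS = {
        57: "close",    # seen right after each BPF prog-id op=UNLOAD record
        211: "sendmsg", # the netlink send behind the iptables-restore NETFILTER_CFG records
        280: "bpf",     # seen with each BPF prog-id op=LOAD record
    }

    def syscall_name(nr: int) -> str:
        return AARCH64_SYSCALLS.get(nr, f"unknown({nr})")

    print(syscall_name(280))  # bpf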
Jan 16 18:05:38.517000 audit: BPF prog-id=249 op=LOAD Jan 16 18:05:38.518000 audit: BPF prog-id=250 op=LOAD Jan 16 18:05:38.518000 audit[5359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=5316 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:38.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238333134323563376431383662663863376430363631346438646463 Jan 16 18:05:38.518000 audit: BPF prog-id=250 op=UNLOAD Jan 16 18:05:38.518000 audit[5359]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5316 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:38.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238333134323563376431383662663863376430363631346438646463 Jan 16 18:05:38.518000 audit: BPF prog-id=251 op=LOAD Jan 16 18:05:38.518000 audit[5359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=5316 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:38.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238333134323563376431383662663863376430363631346438646463 Jan 16 18:05:38.519000 audit: BPF prog-id=252 op=LOAD Jan 16 18:05:38.519000 audit[5359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=5316 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:38.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238333134323563376431383662663863376430363631346438646463 Jan 16 18:05:38.519000 audit: BPF prog-id=252 op=UNLOAD Jan 16 18:05:38.519000 audit[5359]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5316 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:38.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238333134323563376431383662663863376430363631346438646463 Jan 16 18:05:38.519000 audit: BPF prog-id=251 op=UNLOAD Jan 16 18:05:38.519000 audit[5359]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5316 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:38.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238333134323563376431383662663863376430363631346438646463 Jan 16 18:05:38.519000 audit: BPF prog-id=253 op=LOAD Jan 16 18:05:38.519000 audit[5359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=5316 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:38.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238333134323563376431383662663863376430363631346438646463 Jan 16 18:05:38.558540 containerd[1954]: time="2026-01-16T18:05:38.558285855Z" level=info msg="StartContainer for \"2831425c7d186bf8c7d06614d8ddc873bcbe6b71bc7e275d9d94f70f3d2c38a5\" returns successfully" Jan 16 18:05:38.697061 systemd-networkd[1870]: cali844c47ad770: Gained IPv6LL Jan 16 18:05:38.735078 containerd[1954]: time="2026-01-16T18:05:38.734999836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54df9c5477-tm29z,Uid:8adc081a-39d6-4153-ae00-f3df7e2ba175,Namespace:calico-apiserver,Attempt:0,}" Jan 16 18:05:38.966547 systemd-networkd[1870]: cali1ffb8ad1cdf: Link UP Jan 16 18:05:38.971455 systemd-networkd[1870]: cali1ffb8ad1cdf: Gained carrier Jan 16 18:05:39.006436 containerd[1954]: 2026-01-16 18:05:38.810 [INFO][5393] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--tm29z-eth0 calico-apiserver-54df9c5477- calico-apiserver 8adc081a-39d6-4153-ae00-f3df7e2ba175 829 0 2026-01-16 18:04:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54df9c5477 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-22-249 calico-apiserver-54df9c5477-tm29z eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1ffb8ad1cdf [] [] }} ContainerID="836c71e16d9d929397c4a75f25d5dc8ca2e6be2941a8bd9b29d2ecaa5cbcdf24" Namespace="calico-apiserver" Pod="calico-apiserver-54df9c5477-tm29z" WorkloadEndpoint="ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--tm29z-" Jan 16 18:05:39.006436 containerd[1954]: 2026-01-16 18:05:38.811 [INFO][5393] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="836c71e16d9d929397c4a75f25d5dc8ca2e6be2941a8bd9b29d2ecaa5cbcdf24" Namespace="calico-apiserver" Pod="calico-apiserver-54df9c5477-tm29z" WorkloadEndpoint="ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--tm29z-eth0" Jan 16 18:05:39.006436 containerd[1954]: 2026-01-16 18:05:38.863 [INFO][5404] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="836c71e16d9d929397c4a75f25d5dc8ca2e6be2941a8bd9b29d2ecaa5cbcdf24" HandleID="k8s-pod-network.836c71e16d9d929397c4a75f25d5dc8ca2e6be2941a8bd9b29d2ecaa5cbcdf24" Workload="ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--tm29z-eth0" Jan 16 18:05:39.006436 containerd[1954]: 2026-01-16 18:05:38.863 [INFO][5404] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="836c71e16d9d929397c4a75f25d5dc8ca2e6be2941a8bd9b29d2ecaa5cbcdf24" HandleID="k8s-pod-network.836c71e16d9d929397c4a75f25d5dc8ca2e6be2941a8bd9b29d2ecaa5cbcdf24" Workload="ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--tm29z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000122e30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-22-249", "pod":"calico-apiserver-54df9c5477-tm29z", "timestamp":"2026-01-16 18:05:38.863604881 +0000 UTC"}, Hostname:"ip-172-31-22-249", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 18:05:39.006436 containerd[1954]: 2026-01-16 18:05:38.863 [INFO][5404] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 18:05:39.006436 containerd[1954]: 2026-01-16 18:05:38.864 [INFO][5404] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 18:05:39.006436 containerd[1954]: 2026-01-16 18:05:38.864 [INFO][5404] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-249' Jan 16 18:05:39.006436 containerd[1954]: 2026-01-16 18:05:38.878 [INFO][5404] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.836c71e16d9d929397c4a75f25d5dc8ca2e6be2941a8bd9b29d2ecaa5cbcdf24" host="ip-172-31-22-249" Jan 16 18:05:39.006436 containerd[1954]: 2026-01-16 18:05:38.890 [INFO][5404] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-249" Jan 16 18:05:39.006436 containerd[1954]: 2026-01-16 18:05:38.899 [INFO][5404] ipam/ipam.go 511: Trying affinity for 192.168.54.0/26 host="ip-172-31-22-249" Jan 16 18:05:39.006436 containerd[1954]: 2026-01-16 18:05:38.906 [INFO][5404] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.0/26 host="ip-172-31-22-249" Jan 16 18:05:39.006436 containerd[1954]: 2026-01-16 18:05:38.912 [INFO][5404] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="ip-172-31-22-249" Jan 16 18:05:39.006436 containerd[1954]: 2026-01-16 18:05:38.912 [INFO][5404] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.836c71e16d9d929397c4a75f25d5dc8ca2e6be2941a8bd9b29d2ecaa5cbcdf24" host="ip-172-31-22-249" Jan 16 18:05:39.006436 containerd[1954]: 2026-01-16 18:05:38.916 [INFO][5404] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.836c71e16d9d929397c4a75f25d5dc8ca2e6be2941a8bd9b29d2ecaa5cbcdf24 Jan 16 18:05:39.006436 containerd[1954]: 2026-01-16 18:05:38.943 [INFO][5404] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.836c71e16d9d929397c4a75f25d5dc8ca2e6be2941a8bd9b29d2ecaa5cbcdf24" host="ip-172-31-22-249" Jan 16 18:05:39.006436 containerd[1954]: 2026-01-16 18:05:38.955 [INFO][5404] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.8/26] block=192.168.54.0/26 handle="k8s-pod-network.836c71e16d9d929397c4a75f25d5dc8ca2e6be2941a8bd9b29d2ecaa5cbcdf24" host="ip-172-31-22-249" Jan 16 18:05:39.006436 containerd[1954]: 2026-01-16 18:05:38.955 
[INFO][5404] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.8/26] handle="k8s-pod-network.836c71e16d9d929397c4a75f25d5dc8ca2e6be2941a8bd9b29d2ecaa5cbcdf24" host="ip-172-31-22-249" Jan 16 18:05:39.006436 containerd[1954]: 2026-01-16 18:05:38.955 [INFO][5404] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 18:05:39.006436 containerd[1954]: 2026-01-16 18:05:38.955 [INFO][5404] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.8/26] IPv6=[] ContainerID="836c71e16d9d929397c4a75f25d5dc8ca2e6be2941a8bd9b29d2ecaa5cbcdf24" HandleID="k8s-pod-network.836c71e16d9d929397c4a75f25d5dc8ca2e6be2941a8bd9b29d2ecaa5cbcdf24" Workload="ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--tm29z-eth0" Jan 16 18:05:39.009371 containerd[1954]: 2026-01-16 18:05:38.960 [INFO][5393] cni-plugin/k8s.go 418: Populated endpoint ContainerID="836c71e16d9d929397c4a75f25d5dc8ca2e6be2941a8bd9b29d2ecaa5cbcdf24" Namespace="calico-apiserver" Pod="calico-apiserver-54df9c5477-tm29z" WorkloadEndpoint="ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--tm29z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--tm29z-eth0", GenerateName:"calico-apiserver-54df9c5477-", Namespace:"calico-apiserver", SelfLink:"", UID:"8adc081a-39d6-4153-ae00-f3df7e2ba175", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 4, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54df9c5477", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-249", ContainerID:"", Pod:"calico-apiserver-54df9c5477-tm29z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1ffb8ad1cdf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:05:39.009371 containerd[1954]: 2026-01-16 18:05:38.960 [INFO][5393] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.8/32] ContainerID="836c71e16d9d929397c4a75f25d5dc8ca2e6be2941a8bd9b29d2ecaa5cbcdf24" Namespace="calico-apiserver" Pod="calico-apiserver-54df9c5477-tm29z" WorkloadEndpoint="ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--tm29z-eth0" Jan 16 18:05:39.009371 containerd[1954]: 2026-01-16 18:05:38.960 [INFO][5393] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ffb8ad1cdf ContainerID="836c71e16d9d929397c4a75f25d5dc8ca2e6be2941a8bd9b29d2ecaa5cbcdf24" Namespace="calico-apiserver" Pod="calico-apiserver-54df9c5477-tm29z" WorkloadEndpoint="ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--tm29z-eth0" Jan 16 18:05:39.009371 containerd[1954]: 2026-01-16 18:05:38.967 [INFO][5393] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="836c71e16d9d929397c4a75f25d5dc8ca2e6be2941a8bd9b29d2ecaa5cbcdf24" Namespace="calico-apiserver" Pod="calico-apiserver-54df9c5477-tm29z" WorkloadEndpoint="ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--tm29z-eth0" Jan 16 18:05:39.009371 containerd[1954]: 2026-01-16 18:05:38.972 [INFO][5393] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="836c71e16d9d929397c4a75f25d5dc8ca2e6be2941a8bd9b29d2ecaa5cbcdf24" Namespace="calico-apiserver" Pod="calico-apiserver-54df9c5477-tm29z" WorkloadEndpoint="ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--tm29z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--tm29z-eth0", GenerateName:"calico-apiserver-54df9c5477-", Namespace:"calico-apiserver", SelfLink:"", UID:"8adc081a-39d6-4153-ae00-f3df7e2ba175", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 4, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54df9c5477", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-249", ContainerID:"836c71e16d9d929397c4a75f25d5dc8ca2e6be2941a8bd9b29d2ecaa5cbcdf24", Pod:"calico-apiserver-54df9c5477-tm29z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1ffb8ad1cdf", MAC:"d2:75:ed:77:26:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:05:39.009371 containerd[1954]: 2026-01-16 18:05:39.001 [INFO][5393] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="836c71e16d9d929397c4a75f25d5dc8ca2e6be2941a8bd9b29d2ecaa5cbcdf24" Namespace="calico-apiserver" Pod="calico-apiserver-54df9c5477-tm29z" WorkloadEndpoint="ip--172--31--22--249-k8s-calico--apiserver--54df9c5477--tm29z-eth0" Jan 16 18:05:39.056000 audit[5418]: NETFILTER_CFG table=filter:143 family=2 entries=53 op=nft_register_chain pid=5418 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:05:39.056000 audit[5418]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26608 a0=3 a1=ffffecb48d00 a2=0 a3=ffff9439cfa8 items=0 ppid=4594 pid=5418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:39.056000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:05:39.066284 containerd[1954]: time="2026-01-16T18:05:39.066207758Z" level=info msg="connecting to shim 836c71e16d9d929397c4a75f25d5dc8ca2e6be2941a8bd9b29d2ecaa5cbcdf24" 
address="unix:///run/containerd/s/b4aaf5fce61ebfac095ff63fb96aa3dfd6f57575cd32b4cb233085eb92e8eb61" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:05:39.114530 systemd[1]: Started cri-containerd-836c71e16d9d929397c4a75f25d5dc8ca2e6be2941a8bd9b29d2ecaa5cbcdf24.scope - libcontainer container 836c71e16d9d929397c4a75f25d5dc8ca2e6be2941a8bd9b29d2ecaa5cbcdf24. Jan 16 18:05:39.126797 kubelet[3428]: E0116 18:05:39.126576 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54df9c5477-js6cv" podUID="20a749f5-b28f-4523-9135-e8877a359519" Jan 16 18:05:39.169478 kubelet[3428]: I0116 18:05:39.168857 3428 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-z6xnx" podStartSLOduration=55.16883105 podStartE2EDuration="55.16883105s" podCreationTimestamp="2026-01-16 18:04:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 18:05:39.147690566 +0000 UTC m=+59.705389474" watchObservedRunningTime="2026-01-16 18:05:39.16883105 +0000 UTC m=+59.726529922" Jan 16 18:05:39.178000 audit: BPF prog-id=254 op=LOAD Jan 16 18:05:39.178000 audit: BPF prog-id=255 op=LOAD Jan 16 18:05:39.178000 audit[5440]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=5428 pid=5440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:39.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833366337316531366439643932393339376334613735663235643564 Jan 16 18:05:39.179000 audit: BPF prog-id=255 op=UNLOAD Jan 16 18:05:39.179000 audit[5440]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5428 pid=5440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:39.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833366337316531366439643932393339376334613735663235643564 Jan 16 18:05:39.179000 audit: BPF prog-id=256 op=LOAD Jan 16 18:05:39.179000 audit[5440]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=5428 pid=5440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:39.179000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833366337316531366439643932393339376334613735663235643564 Jan 16 18:05:39.179000 audit: BPF prog-id=257 op=LOAD Jan 16 18:05:39.179000 audit[5440]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=5428 pid=5440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:39.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833366337316531366439643932393339376334613735663235643564 Jan 16 18:05:39.179000 audit: BPF prog-id=257 op=UNLOAD Jan 16 18:05:39.179000 audit[5440]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5428 pid=5440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:39.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833366337316531366439643932393339376334613735663235643564 Jan 16 18:05:39.179000 audit: BPF prog-id=256 op=UNLOAD Jan 16 18:05:39.179000 audit[5440]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5428 pid=5440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:39.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833366337316531366439643932393339376334613735663235643564 Jan 16 18:05:39.180000 audit: BPF prog-id=258 op=LOAD Jan 16 18:05:39.180000 audit[5440]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=5428 pid=5440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:39.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833366337316531366439643932393339376334613735663235643564 Jan 16 18:05:39.275309 containerd[1954]: time="2026-01-16T18:05:39.272649975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54df9c5477-tm29z,Uid:8adc081a-39d6-4153-ae00-f3df7e2ba175,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"836c71e16d9d929397c4a75f25d5dc8ca2e6be2941a8bd9b29d2ecaa5cbcdf24\"" Jan 16 18:05:39.279181 containerd[1954]: time="2026-01-16T18:05:39.278984595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:05:39.410000 audit[5468]: NETFILTER_CFG table=filter:144 family=2 
entries=17 op=nft_register_rule pid=5468 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:39.410000 audit[5468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffb7afac0 a2=0 a3=1 items=0 ppid=3574 pid=5468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:39.410000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:39.417000 audit[5468]: NETFILTER_CFG table=nat:145 family=2 entries=35 op=nft_register_chain pid=5468 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:39.417000 audit[5468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffffb7afac0 a2=0 a3=1 items=0 ppid=3574 pid=5468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:39.417000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:39.599776 containerd[1954]: time="2026-01-16T18:05:39.599557480Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:05:39.601909 containerd[1954]: time="2026-01-16T18:05:39.601770112Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:05:39.601909 containerd[1954]: time="2026-01-16T18:05:39.601842688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:39.602203 kubelet[3428]: E0116 18:05:39.602095 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:05:39.602293 kubelet[3428]: E0116 18:05:39.602208 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:05:39.602669 kubelet[3428]: E0116 18:05:39.602557 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rbr88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54df9c5477-tm29z_calico-apiserver(8adc081a-39d6-4153-ae00-f3df7e2ba175): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:39.604105 kubelet[3428]: E0116 18:05:39.603846 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54df9c5477-tm29z" podUID="8adc081a-39d6-4153-ae00-f3df7e2ba175" Jan 16 18:05:39.720388 systemd-networkd[1870]: cali6f8b42fd802: Gained IPv6LL Jan 16 18:05:40.135631 kubelet[3428]: E0116 18:05:40.135463 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54df9c5477-tm29z" podUID="8adc081a-39d6-4153-ae00-f3df7e2ba175" Jan 16 18:05:40.360639 systemd-networkd[1870]: cali1ffb8ad1cdf: Gained IPv6LL Jan 16 18:05:40.482977 kernel: kauditd_printk_skb: 233 callbacks suppressed Jan 16 18:05:40.483158 kernel: audit: type=1325 
audit(1768586740.479:748): table=filter:146 family=2 entries=14 op=nft_register_rule pid=5477 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:40.479000 audit[5477]: NETFILTER_CFG table=filter:146 family=2 entries=14 op=nft_register_rule pid=5477 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:40.479000 audit[5477]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcaf69850 a2=0 a3=1 items=0 ppid=3574 pid=5477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:40.492939 kernel: audit: type=1300 audit(1768586740.479:748): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcaf69850 a2=0 a3=1 items=0 ppid=3574 pid=5477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:40.493046 kernel: audit: type=1327 audit(1768586740.479:748): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:40.479000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:40.498000 audit[5477]: NETFILTER_CFG table=nat:147 family=2 entries=56 op=nft_register_chain pid=5477 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:40.498000 audit[5477]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffcaf69850 a2=0 a3=1 items=0 ppid=3574 pid=5477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:40.510632 kernel: audit: type=1325 audit(1768586740.498:749): table=nat:147 family=2 entries=56 op=nft_register_chain pid=5477 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:05:40.510739 kernel: audit: type=1300 audit(1768586740.498:749): arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffcaf69850 a2=0 a3=1 items=0 ppid=3574 pid=5477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:40.498000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:40.514734 kernel: audit: type=1327 audit(1768586740.498:749): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:05:41.134364 kubelet[3428]: E0116 18:05:41.133534 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54df9c5477-tm29z" podUID="8adc081a-39d6-4153-ae00-f3df7e2ba175" Jan 16 18:05:43.225974 ntpd[1928]: Listen normally on 7 vxlan.calico 192.168.54.0:123 Jan 16 18:05:43.226067 
ntpd[1928]: Listen normally on 8 calibdc70b36c92 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 16 18:05:43.226975 ntpd[1928]: 16 Jan 18:05:43 ntpd[1928]: Listen normally on 7 vxlan.calico 192.168.54.0:123 Jan 16 18:05:43.226975 ntpd[1928]: 16 Jan 18:05:43 ntpd[1928]: Listen normally on 8 calibdc70b36c92 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 16 18:05:43.226975 ntpd[1928]: 16 Jan 18:05:43 ntpd[1928]: Listen normally on 9 vxlan.calico [fe80::64a6:7fff:fe24:f8bb%5]:123 Jan 16 18:05:43.226975 ntpd[1928]: 16 Jan 18:05:43 ntpd[1928]: Listen normally on 10 cali8df69e2e6ce [fe80::ecee:eeff:feee:eeee%8]:123 Jan 16 18:05:43.226975 ntpd[1928]: 16 Jan 18:05:43 ntpd[1928]: Listen normally on 11 cali7885ee761de [fe80::ecee:eeff:feee:eeee%9]:123 Jan 16 18:05:43.226975 ntpd[1928]: 16 Jan 18:05:43 ntpd[1928]: Listen normally on 12 calib500d6e3d92 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 16 18:05:43.226975 ntpd[1928]: 16 Jan 18:05:43 ntpd[1928]: Listen normally on 13 cali9124d392eb1 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 16 18:05:43.226975 ntpd[1928]: 16 Jan 18:05:43 ntpd[1928]: Listen normally on 14 cali844c47ad770 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 16 18:05:43.226975 ntpd[1928]: 16 Jan 18:05:43 ntpd[1928]: Listen normally on 15 cali6f8b42fd802 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 16 18:05:43.226975 ntpd[1928]: 16 Jan 18:05:43 ntpd[1928]: Listen normally on 16 cali1ffb8ad1cdf [fe80::ecee:eeff:feee:eeee%14]:123 Jan 16 18:05:43.226169 ntpd[1928]: Listen normally on 9 vxlan.calico [fe80::64a6:7fff:fe24:f8bb%5]:123 Jan 16 18:05:43.226220 ntpd[1928]: Listen normally on 10 cali8df69e2e6ce [fe80::ecee:eeff:feee:eeee%8]:123 Jan 16 18:05:43.226266 ntpd[1928]: Listen normally on 11 cali7885ee761de [fe80::ecee:eeff:feee:eeee%9]:123 Jan 16 18:05:43.226311 ntpd[1928]: Listen normally on 12 calib500d6e3d92 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 16 18:05:43.226356 ntpd[1928]: Listen normally on 13 cali9124d392eb1 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 16 18:05:43.226402 ntpd[1928]: Listen normally on 14 cali844c47ad770 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 16 18:05:43.226447 ntpd[1928]: Listen normally on 15 cali6f8b42fd802 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 16 18:05:43.226490 ntpd[1928]: Listen normally on 16 cali1ffb8ad1cdf [fe80::ecee:eeff:feee:eeee%14]:123 Jan 16 18:05:47.741272 containerd[1954]: time="2026-01-16T18:05:47.739526161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 18:05:48.045911 containerd[1954]: time="2026-01-16T18:05:48.045615982Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:05:48.048030 containerd[1954]: time="2026-01-16T18:05:48.047889550Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 18:05:48.048207 containerd[1954]: time="2026-01-16T18:05:48.047963626Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:48.048331 kubelet[3428]: E0116 18:05:48.048251 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:05:48.049255 kubelet[3428]: E0116 18:05:48.048322 3428 kuberuntime_image.go:55] "Failed 
to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:05:48.049255 kubelet[3428]: E0116 18:05:48.048552 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:90e9697f810349ed85f10d175bcdf8e8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g74js,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c9648cbf-qbkfw_calico-system(170a3e0a-b54a-4909-a38e-9cdfe9da4171): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:48.049655 containerd[1954]: time="2026-01-16T18:05:48.049560934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 18:05:48.068650 systemd[1]: Started sshd@9-172.31.22.249:22-4.153.228.146:33000.service - OpenSSH per-connection server daemon (4.153.228.146:33000). Jan 16 18:05:48.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.22.249:22-4.153.228.146:33000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:05:48.080155 kernel: audit: type=1130 audit(1768586748.069:750): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.22.249:22-4.153.228.146:33000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:05:48.342289 containerd[1954]: time="2026-01-16T18:05:48.342212172Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:05:48.344726 containerd[1954]: time="2026-01-16T18:05:48.344455716Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 18:05:48.344944 containerd[1954]: time="2026-01-16T18:05:48.344552040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:48.345380 kubelet[3428]: E0116 18:05:48.345083 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:05:48.345380 kubelet[3428]: E0116 18:05:48.345267 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:05:48.346322 kubelet[3428]: E0116 18:05:48.345566 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hgjq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76db965947-cmgcq_calico-system(d70fefd9-029e-4ddf-8559-5e71028d4fd0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:48.347159 kubelet[3428]: E0116 18:05:48.346814 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76db965947-cmgcq" podUID="d70fefd9-029e-4ddf-8559-5e71028d4fd0" Jan 16 18:05:48.347374 containerd[1954]: time="2026-01-16T18:05:48.346986948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 18:05:48.634228 containerd[1954]: time="2026-01-16T18:05:48.633320737Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:05:48.635940 containerd[1954]: time="2026-01-16T18:05:48.635828053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:48.635940 containerd[1954]: time="2026-01-16T18:05:48.635885209Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 18:05:48.636891 kubelet[3428]: E0116 18:05:48.636630 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:05:48.637481 kubelet[3428]: E0116 18:05:48.636693 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:05:48.638136 kubelet[3428]: E0116 18:05:48.638018 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g74js,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c9648cbf-qbkfw_calico-system(170a3e0a-b54a-4909-a38e-9cdfe9da4171): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:48.639576 kubelet[3428]: E0116 18:05:48.639506 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c9648cbf-qbkfw" podUID="170a3e0a-b54a-4909-a38e-9cdfe9da4171" Jan 16 18:05:48.649000 audit[5494]: USER_ACCT pid=5494 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:05:48.656603 sshd[5494]: Accepted publickey for core from 4.153.228.146 port 33000 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:05:48.657253 kernel: audit: type=1101 audit(1768586748.649:751): pid=5494 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:05:48.657000 audit[5494]: CRED_ACQ pid=5494 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:05:48.661752 sshd-session[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:05:48.668688 kernel: audit: type=1103 audit(1768586748.657:752): pid=5494 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:05:48.668797 kernel: audit: type=1006 audit(1768586748.657:753): pid=5494 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 16 18:05:48.657000 audit[5494]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe50b4fd0 a2=3 a3=0 items=0 ppid=1 pid=5494 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:48.673789 systemd-logind[1935]: New session 11 of user core. Jan 16 18:05:48.679324 kernel: audit: type=1300 audit(1768586748.657:753): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe50b4fd0 a2=3 a3=0 items=0 ppid=1 pid=5494 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:48.679398 kernel: audit: type=1327 audit(1768586748.657:753): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:05:48.657000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:05:48.686449 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 16 18:05:48.694000 audit[5494]: USER_START pid=5494 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:05:48.704246 kernel: audit: type=1105 audit(1768586748.694:754): pid=5494 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:05:48.704359 kernel: audit: type=1103 audit(1768586748.702:755): pid=5498 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:05:48.702000 audit[5498]: CRED_ACQ pid=5498 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:05:49.068288 sshd[5498]: Connection closed by 4.153.228.146 port 33000 Jan 16 18:05:49.070408 sshd-session[5494]: pam_unix(sshd:session): session closed for user core Jan 16 18:05:49.074000 audit[5494]: USER_END pid=5494 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:05:49.080297 systemd-logind[1935]: Session 11 logged out. Waiting for processes to exit. Jan 16 18:05:49.084447 systemd[1]: sshd@9-172.31.22.249:22-4.153.228.146:33000.service: Deactivated successfully. Jan 16 18:05:49.074000 audit[5494]: CRED_DISP pid=5494 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:05:49.091030 systemd[1]: session-11.scope: Deactivated successfully. Jan 16 18:05:49.093341 kernel: audit: type=1106 audit(1768586749.074:756): pid=5494 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:05:49.094446 kernel: audit: type=1104 audit(1768586749.074:757): pid=5494 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:05:49.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.22.249:22-4.153.228.146:33000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:05:49.098735 systemd-logind[1935]: Removed session 11. 
Jan 16 18:05:50.737094 containerd[1954]: time="2026-01-16T18:05:50.737027812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 18:05:51.046611 containerd[1954]: time="2026-01-16T18:05:51.046287181Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:05:51.048590 containerd[1954]: time="2026-01-16T18:05:51.048528049Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 18:05:51.048720 containerd[1954]: time="2026-01-16T18:05:51.048646405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:51.049028 kubelet[3428]: E0116 18:05:51.048960 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 18:05:51.049686 kubelet[3428]: E0116 18:05:51.049033 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 18:05:51.049686 kubelet[3428]: E0116 18:05:51.049246 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kk276,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-t9x2h_calico-system(3932bc10-72fd-4993-add3-bcb26a36ba2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:51.051004 kubelet[3428]: E0116 18:05:51.050936 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t9x2h" podUID="3932bc10-72fd-4993-add3-bcb26a36ba2d" Jan 16 18:05:51.741781 containerd[1954]: time="2026-01-16T18:05:51.740663249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 18:05:52.028736 containerd[1954]: time="2026-01-16T18:05:52.028575590Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:05:52.030900 containerd[1954]: time="2026-01-16T18:05:52.030831878Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 18:05:52.030998 containerd[1954]: time="2026-01-16T18:05:52.030950486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:52.031332 kubelet[3428]: E0116 18:05:52.031285 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:05:52.031492 kubelet[3428]: E0116 18:05:52.031464 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:05:52.031846 kubelet[3428]: E0116 18:05:52.031758 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4p5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-88slq_calico-system(74813863-8ca6-40d9-bd92-5b37511fc2e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:52.036236 containerd[1954]: time="2026-01-16T18:05:52.036163670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 18:05:52.307963 containerd[1954]: time="2026-01-16T18:05:52.307759287Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:05:52.310640 containerd[1954]: time="2026-01-16T18:05:52.310551651Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 18:05:52.310867 containerd[1954]: time="2026-01-16T18:05:52.310605027Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:52.311141 kubelet[3428]: E0116 18:05:52.311047 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:05:52.311652 kubelet[3428]: E0116 18:05:52.311171 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:05:52.312058 kubelet[3428]: E0116 18:05:52.311944 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4p5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-88slq_calico-system(74813863-8ca6-40d9-bd92-5b37511fc2e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:52.313465 kubelet[3428]: E0116 18:05:52.313370 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-88slq" podUID="74813863-8ca6-40d9-bd92-5b37511fc2e0" Jan 16 18:05:52.737153 containerd[1954]: time="2026-01-16T18:05:52.735895830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:05:53.062825 containerd[1954]: time="2026-01-16T18:05:53.062499759Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 
18:05:53.064763 containerd[1954]: time="2026-01-16T18:05:53.064699407Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:05:53.064912 containerd[1954]: time="2026-01-16T18:05:53.064819215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:53.065206 kubelet[3428]: E0116 18:05:53.065082 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:05:53.065573 kubelet[3428]: E0116 18:05:53.065222 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:05:53.065573 kubelet[3428]: E0116 18:05:53.065443 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xkph5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54df9c5477-js6cv_calico-apiserver(20a749f5-b28f-4523-9135-e8877a359519): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:53.066744 kubelet[3428]: E0116 18:05:53.066670 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54df9c5477-js6cv" podUID="20a749f5-b28f-4523-9135-e8877a359519" Jan 16 18:05:54.168673 systemd[1]: Started sshd@10-172.31.22.249:22-4.153.228.146:33012.service - OpenSSH per-connection server daemon (4.153.228.146:33012). Jan 16 18:05:54.176621 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 18:05:54.176745 kernel: audit: type=1130 audit(1768586754.168:759): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.22.249:22-4.153.228.146:33012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:05:54.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.22.249:22-4.153.228.146:33012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:05:54.664000 audit[5514]: USER_ACCT pid=5514 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:05:54.667347 sshd[5514]: Accepted publickey for core from 4.153.228.146 port 33012 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:05:54.673292 kernel: audit: type=1101 audit(1768586754.664:760): pid=5514 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:05:54.672000 audit[5514]: CRED_ACQ pid=5514 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:05:54.675421 sshd-session[5514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:05:54.682993 kernel: audit: type=1103 audit(1768586754.672:761): pid=5514 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:05:54.683101 kernel: audit: type=1006 audit(1768586754.672:762): pid=5514 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 16 18:05:54.672000 audit[5514]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe79c1b10 a2=3 a3=0 items=0 ppid=1 pid=5514 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
18:05:54.690234 kernel: audit: type=1300 audit(1768586754.672:762): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe79c1b10 a2=3 a3=0 items=0 ppid=1 pid=5514 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:54.672000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:05:54.693072 kernel: audit: type=1327 audit(1768586754.672:762): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:05:54.696288 systemd-logind[1935]: New session 12 of user core. Jan 16 18:05:54.702472 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 16 18:05:54.708000 audit[5514]: USER_START pid=5514 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:05:54.717000 audit[5524]: CRED_ACQ pid=5524 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:05:54.724294 kernel: audit: type=1105 audit(1768586754.708:763): pid=5514 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:05:54.724399 kernel: audit: type=1103 audit(1768586754.717:764): pid=5524 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:05:54.736862 containerd[1954]: time="2026-01-16T18:05:54.736543987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:05:55.049159 containerd[1954]: time="2026-01-16T18:05:55.048972509Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:05:55.051355 containerd[1954]: time="2026-01-16T18:05:55.051284897Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:05:55.051494 containerd[1954]: time="2026-01-16T18:05:55.051402509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:55.051768 kubelet[3428]: E0116 18:05:55.051719 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:05:55.053482 kubelet[3428]: E0116 18:05:55.052245 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:05:55.053769 kubelet[3428]: E0116 18:05:55.053585 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rbr88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54df9c5477-tm29z_calico-apiserver(8adc081a-39d6-4153-ae00-f3df7e2ba175): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:55.055068 kubelet[3428]: E0116 18:05:55.055006 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54df9c5477-tm29z" podUID="8adc081a-39d6-4153-ae00-f3df7e2ba175" Jan 16 18:05:55.100166 sshd[5524]: Connection closed by 4.153.228.146 port 33012 Jan 16 18:05:55.100000 sshd-session[5514]: pam_unix(sshd:session): session closed for user core Jan 16 18:05:55.103000 audit[5514]: USER_END pid=5514 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:05:55.111294 systemd[1]: sshd@10-172.31.22.249:22-4.153.228.146:33012.service: Deactivated successfully. Jan 16 18:05:55.103000 audit[5514]: CRED_DISP pid=5514 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:05:55.117404 systemd[1]: session-12.scope: Deactivated successfully. Jan 16 18:05:55.118168 kernel: audit: type=1106 audit(1768586755.103:765): pid=5514 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:05:55.118374 kernel: audit: type=1104 audit(1768586755.103:766): pid=5514 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:05:55.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.22.249:22-4.153.228.146:33012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:05:55.123346 systemd-logind[1935]: Session 12 logged out. Waiting for processes to exit. Jan 16 18:05:55.125197 systemd-logind[1935]: Removed session 12. Jan 16 18:05:59.736770 kubelet[3428]: E0116 18:05:59.736055 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76db965947-cmgcq" podUID="d70fefd9-029e-4ddf-8559-5e71028d4fd0" Jan 16 18:06:00.202171 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 18:06:00.202264 kernel: audit: type=1130 audit(1768586760.199:768): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.22.249:22-4.153.228.146:45924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:00.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.22.249:22-4.153.228.146:45924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:00.200021 systemd[1]: Started sshd@11-172.31.22.249:22-4.153.228.146:45924.service - OpenSSH per-connection server daemon (4.153.228.146:45924). 
Jan 16 18:06:00.707000 audit[5542]: USER_ACCT pid=5542 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:00.709230 sshd[5542]: Accepted publickey for core from 4.153.228.146 port 45924 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:06:00.715000 audit[5542]: CRED_ACQ pid=5542 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:00.718744 sshd-session[5542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:06:00.722522 kernel: audit: type=1101 audit(1768586760.707:769): pid=5542 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:00.722632 kernel: audit: type=1103 audit(1768586760.715:770): pid=5542 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:00.726220 kernel: audit: type=1006 audit(1768586760.715:771): pid=5542 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 16 18:06:00.715000 audit[5542]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe8841840 a2=3 a3=0 items=0 ppid=1 pid=5542 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:00.732685 kernel: audit: type=1300 audit(1768586760.715:771): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe8841840 a2=3 a3=0 items=0 ppid=1 pid=5542 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:00.715000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:00.735188 kernel: audit: type=1327 audit(1768586760.715:771): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:00.742217 systemd-logind[1935]: New session 13 of user core. Jan 16 18:06:00.756468 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 16 18:06:00.762000 audit[5542]: USER_START pid=5542 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:00.769000 audit[5547]: CRED_ACQ pid=5547 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:00.776757 kernel: audit: type=1105 audit(1768586760.762:772): pid=5542 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:00.776842 kernel: audit: type=1103 audit(1768586760.769:773): pid=5547 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:01.159606 sshd[5547]: Connection closed by 4.153.228.146 port 45924 Jan 16 18:06:01.161386 sshd-session[5542]: pam_unix(sshd:session): session closed for user core Jan 16 18:06:01.163000 audit[5542]: USER_END pid=5542 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:01.169805 systemd[1]: sshd@11-172.31.22.249:22-4.153.228.146:45924.service: Deactivated successfully. Jan 16 18:06:01.163000 audit[5542]: CRED_DISP pid=5542 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:01.179881 kernel: audit: type=1106 audit(1768586761.163:774): pid=5542 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:01.179987 kernel: audit: type=1104 audit(1768586761.163:775): pid=5542 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:01.175560 systemd[1]: session-13.scope: Deactivated successfully. Jan 16 18:06:01.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.22.249:22-4.153.228.146:45924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:01.183663 systemd-logind[1935]: Session 13 logged out. Waiting for processes to exit. Jan 16 18:06:01.186302 systemd-logind[1935]: Removed session 13. 
Jan 16 18:06:01.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.22.249:22-4.153.228.146:45940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:01.270216 systemd[1]: Started sshd@12-172.31.22.249:22-4.153.228.146:45940.service - OpenSSH per-connection server daemon (4.153.228.146:45940). Jan 16 18:06:01.738230 kubelet[3428]: E0116 18:06:01.737877 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c9648cbf-qbkfw" podUID="170a3e0a-b54a-4909-a38e-9cdfe9da4171" Jan 16 18:06:01.763000 audit[5560]: USER_ACCT pid=5560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:01.769643 sshd[5560]: Accepted publickey for core from 4.153.228.146 port 45940 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:06:01.772000 audit[5560]: CRED_ACQ pid=5560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:01.774000 audit[5560]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc7876e00 a2=3 a3=0 items=0 ppid=1 pid=5560 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:01.774000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:01.779031 sshd-session[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:06:01.793449 systemd-logind[1935]: New session 14 of user core. Jan 16 18:06:01.803435 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 16 18:06:01.808000 audit[5560]: USER_START pid=5560 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:01.812000 audit[5564]: CRED_ACQ pid=5564 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:02.271883 sshd[5564]: Connection closed by 4.153.228.146 port 45940 Jan 16 18:06:02.273045 sshd-session[5560]: pam_unix(sshd:session): session closed for user core Jan 16 18:06:02.276000 audit[5560]: USER_END pid=5560 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:02.276000 audit[5560]: CRED_DISP pid=5560 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:02.284573 systemd[1]: sshd@12-172.31.22.249:22-4.153.228.146:45940.service: Deactivated successfully. Jan 16 18:06:02.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.22.249:22-4.153.228.146:45940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:02.292303 systemd[1]: session-14.scope: Deactivated successfully. Jan 16 18:06:02.297567 systemd-logind[1935]: Session 14 logged out. Waiting for processes to exit. Jan 16 18:06:02.299735 systemd-logind[1935]: Removed session 14. Jan 16 18:06:02.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.22.249:22-4.153.228.146:45946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:02.372590 systemd[1]: Started sshd@13-172.31.22.249:22-4.153.228.146:45946.service - OpenSSH per-connection server daemon (4.153.228.146:45946). 
Jan 16 18:06:02.873000 audit[5599]: USER_ACCT pid=5599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:02.876003 sshd[5599]: Accepted publickey for core from 4.153.228.146 port 45946 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:06:02.875000 audit[5599]: CRED_ACQ pid=5599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:02.875000 audit[5599]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff32faa70 a2=3 a3=0 items=0 ppid=1 pid=5599 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:02.875000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:02.878359 sshd-session[5599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:06:02.896396 systemd-logind[1935]: New session 15 of user core. Jan 16 18:06:02.900525 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 16 18:06:02.908000 audit[5599]: USER_START pid=5599 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:02.913000 audit[5604]: CRED_ACQ pid=5604 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:03.256580 sshd[5604]: Connection closed by 4.153.228.146 port 45946 Jan 16 18:06:03.257442 sshd-session[5599]: pam_unix(sshd:session): session closed for user core Jan 16 18:06:03.260000 audit[5599]: USER_END pid=5599 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:03.260000 audit[5599]: CRED_DISP pid=5599 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:03.267305 systemd[1]: sshd@13-172.31.22.249:22-4.153.228.146:45946.service: Deactivated successfully. Jan 16 18:06:03.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.22.249:22-4.153.228.146:45946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:03.272707 systemd[1]: session-15.scope: Deactivated successfully. Jan 16 18:06:03.275795 systemd-logind[1935]: Session 15 logged out. Waiting for processes to exit. 
Jan 16 18:06:03.279005 systemd-logind[1935]: Removed session 15. Jan 16 18:06:04.735772 kubelet[3428]: E0116 18:06:04.735608 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54df9c5477-js6cv" podUID="20a749f5-b28f-4523-9135-e8877a359519" Jan 16 18:06:04.738870 kubelet[3428]: E0116 18:06:04.738098 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-88slq" podUID="74813863-8ca6-40d9-bd92-5b37511fc2e0" Jan 16 18:06:05.738010 kubelet[3428]: E0116 18:06:05.737554 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t9x2h" podUID="3932bc10-72fd-4993-add3-bcb26a36ba2d" Jan 16 18:06:07.738324 kubelet[3428]: E0116 18:06:07.738251 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54df9c5477-tm29z" podUID="8adc081a-39d6-4153-ae00-f3df7e2ba175" Jan 16 18:06:08.357392 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 16 18:06:08.357531 kernel: audit: type=1130 audit(1768586768.354:795): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.22.249:22-4.153.228.146:56746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:08.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.22.249:22-4.153.228.146:56746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:08.355593 systemd[1]: Started sshd@14-172.31.22.249:22-4.153.228.146:56746.service - OpenSSH per-connection server daemon (4.153.228.146:56746). 
Jan 16 18:06:08.852000 audit[5624]: USER_ACCT pid=5624 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:08.861308 sshd[5624]: Accepted publickey for core from 4.153.228.146 port 56746 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:06:08.860000 audit[5624]: CRED_ACQ pid=5624 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:08.868019 kernel: audit: type=1101 audit(1768586768.852:796): pid=5624 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:08.868164 kernel: audit: type=1103 audit(1768586768.860:797): pid=5624 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:08.862413 sshd-session[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:06:08.871792 kernel: audit: type=1006 audit(1768586768.860:798): pid=5624 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 16 18:06:08.860000 audit[5624]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe03f96d0 a2=3 a3=0 items=0 ppid=1 pid=5624 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:08.878569 kernel: audit: type=1300 audit(1768586768.860:798): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe03f96d0 a2=3 a3=0 items=0 ppid=1 pid=5624 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:08.860000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:08.881715 kernel: audit: type=1327 audit(1768586768.860:798): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:08.887567 systemd-logind[1935]: New session 16 of user core. Jan 16 18:06:08.896440 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 16 18:06:08.904000 audit[5624]: USER_START pid=5624 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:08.913557 kernel: audit: type=1105 audit(1768586768.904:799): pid=5624 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:08.913685 kernel: audit: type=1103 audit(1768586768.911:800): pid=5628 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:08.911000 audit[5628]: CRED_ACQ pid=5628 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:09.234044 sshd[5628]: Connection closed by 4.153.228.146 port 56746 Jan 16 18:06:09.234406 sshd-session[5624]: pam_unix(sshd:session): session closed for user core Jan 16 18:06:09.236000 audit[5624]: USER_END pid=5624 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:09.246393 systemd[1]: sshd@14-172.31.22.249:22-4.153.228.146:56746.service: Deactivated successfully. Jan 16 18:06:09.253532 kernel: audit: type=1106 audit(1768586769.236:801): pid=5624 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:09.253656 kernel: audit: type=1104 audit(1768586769.236:802): pid=5624 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:09.236000 audit[5624]: CRED_DISP pid=5624 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:09.250506 systemd[1]: session-16.scope: Deactivated successfully. Jan 16 18:06:09.255948 systemd-logind[1935]: Session 16 logged out. Waiting for processes to exit. Jan 16 18:06:09.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.22.249:22-4.153.228.146:56746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:09.259105 systemd-logind[1935]: Removed session 16. 
Jan 16 18:06:11.738803 containerd[1954]: time="2026-01-16T18:06:11.737836728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 18:06:11.982312 containerd[1954]: time="2026-01-16T18:06:11.982250533Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:06:11.984660 containerd[1954]: time="2026-01-16T18:06:11.984599773Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 18:06:11.984775 containerd[1954]: time="2026-01-16T18:06:11.984717529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 18:06:11.985026 kubelet[3428]: E0116 18:06:11.984967 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:06:11.986618 kubelet[3428]: E0116 18:06:11.985036 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:06:11.986618 kubelet[3428]: E0116 18:06:11.985253 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hgjq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76db965947-cmgcq_calico-system(d70fefd9-029e-4ddf-8559-5e71028d4fd0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 18:06:11.987101 kubelet[3428]: E0116 18:06:11.986884 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76db965947-cmgcq" podUID="d70fefd9-029e-4ddf-8559-5e71028d4fd0" Jan 16 18:06:13.739052 containerd[1954]: time="2026-01-16T18:06:13.738989498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 18:06:14.005155 containerd[1954]: time="2026-01-16T18:06:14.004956143Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:06:14.007317 containerd[1954]: time="2026-01-16T18:06:14.007233263Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 18:06:14.007691 containerd[1954]: time="2026-01-16T18:06:14.007374551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 18:06:14.007970 kubelet[3428]: E0116 18:06:14.007617 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:06:14.007970 kubelet[3428]: E0116 18:06:14.007677 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:06:14.007970 kubelet[3428]: E0116 18:06:14.007838 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:90e9697f810349ed85f10d175bcdf8e8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g74js,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c9648cbf-qbkfw_calico-system(170a3e0a-b54a-4909-a38e-9cdfe9da4171): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 18:06:14.012041 containerd[1954]: time="2026-01-16T18:06:14.011983127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 18:06:14.295794 containerd[1954]: time="2026-01-16T18:06:14.295287301Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:06:14.297605 containerd[1954]: time="2026-01-16T18:06:14.297527521Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 18:06:14.298254 containerd[1954]: time="2026-01-16T18:06:14.297664717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 18:06:14.298380 kubelet[3428]: E0116 18:06:14.298330 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:06:14.298551 kubelet[3428]: E0116 18:06:14.298391 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:06:14.298650 kubelet[3428]: E0116 18:06:14.298544 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g74js,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c9648cbf-qbkfw_calico-system(170a3e0a-b54a-4909-a38e-9cdfe9da4171): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 18:06:14.300378 kubelet[3428]: E0116 18:06:14.300303 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c9648cbf-qbkfw" podUID="170a3e0a-b54a-4909-a38e-9cdfe9da4171" Jan 16 18:06:14.330493 systemd[1]: Started sshd@15-172.31.22.249:22-4.153.228.146:56762.service - OpenSSH per-connection server daemon (4.153.228.146:56762). Jan 16 18:06:14.340576 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 18:06:14.340651 kernel: audit: type=1130 audit(1768586774.329:804): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.22.249:22-4.153.228.146:56762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:06:14.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.22.249:22-4.153.228.146:56762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:14.838000 audit[5640]: USER_ACCT pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:14.844911 sshd[5640]: Accepted publickey for core from 4.153.228.146 port 56762 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:06:14.846143 kernel: audit: type=1101 audit(1768586774.838:805): pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:14.846000 audit[5640]: CRED_ACQ pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:14.854009 sshd-session[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:06:14.858066 kernel: audit: type=1103 audit(1768586774.846:806): pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:14.858216 kernel: audit: type=1006 audit(1768586774.852:807): pid=5640 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 16 18:06:14.852000 audit[5640]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd360bf30 a2=3 a3=0 items=0 ppid=1 pid=5640 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:14.865292 kernel: audit: type=1300 audit(1768586774.852:807): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd360bf30 a2=3 a3=0 items=0 ppid=1 pid=5640 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:14.852000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:14.868080 kernel: audit: type=1327 audit(1768586774.852:807): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:14.881838 systemd-logind[1935]: New session 17 of user core. Jan 16 18:06:14.888574 systemd[1]: Started session-17.scope - Session 17 of User core. 
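Note on the audit PROCTITLE records above: the proctitle= value is the process command line hex-encoded by the kernel (argv entries are separated by NUL bytes), which is why it shows up as a hex string rather than text; ausearch -i performs the same decoding. A minimal throwaway helper (not part of any tool in this log) shows what the value seen above contains:

    # Decode an audit PROCTITLE value: the kernel hex-encodes /proc/<pid>/cmdline,
    # whose argv entries are separated by NUL bytes.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace")

    # Value copied verbatim from the audit records above.
    print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
    # -> sshd-session: core [priv]

The same helper applied to the iptables-restore audit records later in this log yields "iptables-restore -w 5 -W 100000 --noflush --counters".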
Jan 16 18:06:14.925000 audit[5640]: USER_START pid=5640 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:14.935151 kernel: audit: type=1105 audit(1768586774.925:808): pid=5640 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:14.939000 audit[5650]: CRED_ACQ pid=5650 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:14.947185 kernel: audit: type=1103 audit(1768586774.939:809): pid=5650 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:15.260769 sshd[5650]: Connection closed by 4.153.228.146 port 56762 Jan 16 18:06:15.260473 sshd-session[5640]: pam_unix(sshd:session): session closed for user core Jan 16 18:06:15.265000 audit[5640]: USER_END pid=5640 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:15.273577 systemd[1]: sshd@15-172.31.22.249:22-4.153.228.146:56762.service: Deactivated successfully. Jan 16 18:06:15.266000 audit[5640]: CRED_DISP pid=5640 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:15.281351 kernel: audit: type=1106 audit(1768586775.265:810): pid=5640 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:15.281468 kernel: audit: type=1104 audit(1768586775.266:811): pid=5640 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:15.276821 systemd[1]: session-17.scope: Deactivated successfully. Jan 16 18:06:15.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.22.249:22-4.153.228.146:56762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:15.284259 systemd-logind[1935]: Session 17 logged out. Waiting for processes to exit. Jan 16 18:06:15.289059 systemd-logind[1935]: Removed session 17. 
Jan 16 18:06:16.738507 containerd[1954]: time="2026-01-16T18:06:16.737808533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 18:06:17.013413 containerd[1954]: time="2026-01-16T18:06:17.012906818Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:06:17.015822 containerd[1954]: time="2026-01-16T18:06:17.015696458Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 18:06:17.015822 containerd[1954]: time="2026-01-16T18:06:17.015771470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 18:06:17.016243 kubelet[3428]: E0116 18:06:17.016182 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:06:17.016868 kubelet[3428]: E0116 18:06:17.016256 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:06:17.016868 kubelet[3428]: E0116 18:06:17.016427 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4p5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-88slq_calico-system(74813863-8ca6-40d9-bd92-5b37511fc2e0): ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 18:06:17.019484 containerd[1954]: time="2026-01-16T18:06:17.019431158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 18:06:17.363369 containerd[1954]: time="2026-01-16T18:06:17.363289756Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:06:17.366130 containerd[1954]: time="2026-01-16T18:06:17.366062272Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 18:06:17.366241 containerd[1954]: time="2026-01-16T18:06:17.366198700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 18:06:17.366576 kubelet[3428]: E0116 18:06:17.366516 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:06:17.366699 kubelet[3428]: E0116 18:06:17.366587 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:06:17.366867 kubelet[3428]: E0116 18:06:17.366781 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4p5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-88slq_calico-system(74813863-8ca6-40d9-bd92-5b37511fc2e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 18:06:17.368287 kubelet[3428]: E0116 18:06:17.368206 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-88slq" podUID="74813863-8ca6-40d9-bd92-5b37511fc2e0" Jan 16 18:06:17.739297 containerd[1954]: time="2026-01-16T18:06:17.738518718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:06:18.029398 containerd[1954]: time="2026-01-16T18:06:18.029044911Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:06:18.031809 containerd[1954]: time="2026-01-16T18:06:18.031675827Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:06:18.031809 containerd[1954]: time="2026-01-16T18:06:18.031749363Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:06:18.032096 kubelet[3428]: E0116 18:06:18.031962 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:06:18.032096 kubelet[3428]: E0116 18:06:18.032020 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:06:18.032676 kubelet[3428]: E0116 18:06:18.032229 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xkph5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54df9c5477-js6cv_calico-apiserver(20a749f5-b28f-4523-9135-e8877a359519): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:06:18.034239 kubelet[3428]: E0116 18:06:18.034140 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54df9c5477-js6cv" podUID="20a749f5-b28f-4523-9135-e8877a359519" Jan 16 18:06:19.738603 containerd[1954]: time="2026-01-16T18:06:19.738512792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 18:06:20.035694 containerd[1954]: time="2026-01-16T18:06:20.035555309Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:06:20.037791 containerd[1954]: time="2026-01-16T18:06:20.037710509Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 18:06:20.037922 containerd[1954]: time="2026-01-16T18:06:20.037836137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 18:06:20.038236 kubelet[3428]: E0116 18:06:20.038167 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 18:06:20.038733 kubelet[3428]: E0116 18:06:20.038257 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 18:06:20.040059 kubelet[3428]: E0116 18:06:20.039163 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kk276,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-t9x2h_calico-system(3932bc10-72fd-4993-add3-bcb26a36ba2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 18:06:20.040634 kubelet[3428]: E0116 18:06:20.040575 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t9x2h" podUID="3932bc10-72fd-4993-add3-bcb26a36ba2d" Jan 16 18:06:20.364560 systemd[1]: Started sshd@16-172.31.22.249:22-4.153.228.146:58192.service - OpenSSH per-connection server daemon (4.153.228.146:58192). Jan 16 18:06:20.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.22.249:22-4.153.228.146:58192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:20.366537 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 18:06:20.366612 kernel: audit: type=1130 audit(1768586780.365:813): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.22.249:22-4.153.228.146:58192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:06:20.738872 containerd[1954]: time="2026-01-16T18:06:20.738190029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:06:20.873000 audit[5668]: USER_ACCT pid=5668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:20.874080 sshd[5668]: Accepted publickey for core from 4.153.228.146 port 58192 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:06:20.881180 kernel: audit: type=1101 audit(1768586780.873:814): pid=5668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:20.881000 audit[5668]: CRED_ACQ pid=5668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:20.883964 sshd-session[5668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:06:20.891392 kernel: audit: type=1103 audit(1768586780.881:815): pid=5668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:20.891500 kernel: audit: type=1006 audit(1768586780.881:816): pid=5668 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 16 18:06:20.881000 audit[5668]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcbac6460 a2=3 a3=0 items=0 ppid=1 pid=5668 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:20.898395 kernel: audit: type=1300 audit(1768586780.881:816): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcbac6460 a2=3 a3=0 items=0 ppid=1 pid=5668 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:20.881000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:20.901373 kernel: audit: type=1327 audit(1768586780.881:816): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:20.904361 systemd-logind[1935]: New session 18 of user core. Jan 16 18:06:20.911431 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 16 18:06:20.920000 audit[5668]: USER_START pid=5668 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:20.929312 kernel: audit: type=1105 audit(1768586780.920:817): pid=5668 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:20.929413 kernel: audit: type=1103 audit(1768586780.929:818): pid=5672 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:20.929000 audit[5672]: CRED_ACQ pid=5672 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:21.035864 containerd[1954]: time="2026-01-16T18:06:21.035575710Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:06:21.037954 containerd[1954]: time="2026-01-16T18:06:21.037880190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:06:21.038077 containerd[1954]: time="2026-01-16T18:06:21.037904646Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:06:21.038379 kubelet[3428]: E0116 18:06:21.038287 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:06:21.038931 kubelet[3428]: E0116 18:06:21.038381 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:06:21.038931 kubelet[3428]: E0116 18:06:21.038627 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rbr88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54df9c5477-tm29z_calico-apiserver(8adc081a-39d6-4153-ae00-f3df7e2ba175): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:06:21.040513 kubelet[3428]: E0116 18:06:21.040415 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54df9c5477-tm29z" podUID="8adc081a-39d6-4153-ae00-f3df7e2ba175" Jan 16 18:06:21.262182 sshd[5672]: Connection closed by 4.153.228.146 port 58192 Jan 16 18:06:21.263005 sshd-session[5668]: pam_unix(sshd:session): session closed for user core Jan 16 18:06:21.265000 audit[5668]: USER_END pid=5668 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:21.272227 systemd-logind[1935]: Session 18 logged out. Waiting for processes to exit. 
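Every image pull in this log fails the same way: containerd gets a 404 Not Found from ghcr.io and kubelet reports ErrImagePull with "not found", meaning the tag v3.30.4 is simply not published under ghcr.io/flatcar/calico/* as far as the registry is concerned. A minimal sketch of how one could confirm that off the node with the standard OCI distribution API, assuming the anonymous pull-token flow ghcr.io normally grants for public images (the repository and tag below are taken from the log; the helper itself is illustrative):

    import json
    import urllib.error
    import urllib.request

    def ghcr_tag_exists(repository: str, tag: str) -> bool:
        """Return True if ghcr.io serves a manifest for repository:tag (anonymous pull)."""
        # 1. Fetch an anonymous pull token for the repository.
        token_url = f"https://ghcr.io/token?scope=repository:{repository}:pull"
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]
        # 2. Ask for the manifest; a 404 here is what containerd logs above as NotFound.
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repository}/manifests/{tag}", method="HEAD"
        )
        req.add_header("Authorization", f"Bearer {token}")
        req.add_header(
            "Accept",
            "application/vnd.oci.image.index.v1+json, "
            "application/vnd.docker.distribution.manifest.list.v2+json",
        )
        try:
            with urllib.request.urlopen(req) as resp:
                return 200 <= resp.status < 300
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    print(ghcr_tag_exists("flatcar/calico/kube-controllers", "v3.30.4"))

On the node itself, "crictl pull ghcr.io/flatcar/calico/kube-controllers:v3.30.4" should fail with the same NotFound error, since it goes through the same containerd image service that produced the entries above.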
Jan 16 18:06:21.268000 audit[5668]: CRED_DISP pid=5668 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:21.280272 kernel: audit: type=1106 audit(1768586781.265:819): pid=5668 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:21.280399 kernel: audit: type=1104 audit(1768586781.268:820): pid=5668 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:21.276183 systemd[1]: sshd@16-172.31.22.249:22-4.153.228.146:58192.service: Deactivated successfully. Jan 16 18:06:21.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.22.249:22-4.153.228.146:58192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:21.282627 systemd[1]: session-18.scope: Deactivated successfully. Jan 16 18:06:21.288230 systemd-logind[1935]: Removed session 18. Jan 16 18:06:21.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.22.249:22-4.153.228.146:58198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:21.366237 systemd[1]: Started sshd@17-172.31.22.249:22-4.153.228.146:58198.service - OpenSSH per-connection server daemon (4.153.228.146:58198). Jan 16 18:06:21.862000 audit[5684]: USER_ACCT pid=5684 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:21.863478 sshd[5684]: Accepted publickey for core from 4.153.228.146 port 58198 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:06:21.865000 audit[5684]: CRED_ACQ pid=5684 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:21.865000 audit[5684]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffd817400 a2=3 a3=0 items=0 ppid=1 pid=5684 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:21.865000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:21.867389 sshd-session[5684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:06:21.876214 systemd-logind[1935]: New session 19 of user core. Jan 16 18:06:21.887529 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 16 18:06:21.893000 audit[5684]: USER_START pid=5684 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:21.897000 audit[5688]: CRED_ACQ pid=5688 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:23.222402 sshd[5688]: Connection closed by 4.153.228.146 port 58198 Jan 16 18:06:23.223463 sshd-session[5684]: pam_unix(sshd:session): session closed for user core Jan 16 18:06:23.227000 audit[5684]: USER_END pid=5684 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:23.228000 audit[5684]: CRED_DISP pid=5684 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:23.233686 systemd[1]: sshd@17-172.31.22.249:22-4.153.228.146:58198.service: Deactivated successfully. Jan 16 18:06:23.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.22.249:22-4.153.228.146:58198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:23.239679 systemd[1]: session-19.scope: Deactivated successfully. Jan 16 18:06:23.243818 systemd-logind[1935]: Session 19 logged out. Waiting for processes to exit. Jan 16 18:06:23.246863 systemd-logind[1935]: Removed session 19. Jan 16 18:06:23.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.22.249:22-4.153.228.146:58204 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:23.309071 systemd[1]: Started sshd@18-172.31.22.249:22-4.153.228.146:58204.service - OpenSSH per-connection server daemon (4.153.228.146:58204). 
Jan 16 18:06:23.736002 kubelet[3428]: E0116 18:06:23.735821 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76db965947-cmgcq" podUID="d70fefd9-029e-4ddf-8559-5e71028d4fd0" Jan 16 18:06:23.783000 audit[5698]: USER_ACCT pid=5698 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:23.784188 sshd[5698]: Accepted publickey for core from 4.153.228.146 port 58204 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:06:23.785000 audit[5698]: CRED_ACQ pid=5698 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:23.785000 audit[5698]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd038f810 a2=3 a3=0 items=0 ppid=1 pid=5698 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:23.785000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:23.787541 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:06:23.799341 systemd-logind[1935]: New session 20 of user core. Jan 16 18:06:23.818438 systemd[1]: Started session-20.scope - Session 20 of User core. 
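By 18:06:23 the calico-kube-controllers pod has moved from ErrImagePull to ImagePullBackOff (the "Back-off pulling image" entry above): kubelet no longer retries immediately but waits an increasing interval between pull attempts, doubling the delay up to the documented cap of 300 seconds. A rough sketch of that schedule, assuming a 10-second starting delay (the cap is documented; the starting value and doubling factor are stated here as assumptions about kubelet's defaults):

    def backoff_delays(base: float = 10.0, cap: float = 300.0, attempts: int = 8):
        """Yield the wait before each retry: the delay doubles and is clamped at the cap."""
        delay = base
        for _ in range(attempts):
            yield delay
            delay = min(delay * 2, cap)

    print([f"{d:.0f}s" for d in backoff_delays()])
    # ['10s', '20s', '40s', '80s', '160s', '300s', '300s', '300s']

This is why the same pods alternate between fresh ErrImagePull records (when a retry actually runs) and ImagePullBackOff records (while kubelet is waiting out the delay).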
Jan 16 18:06:23.824000 audit[5698]: USER_START pid=5698 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:23.827000 audit[5702]: CRED_ACQ pid=5702 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:24.856000 audit[5715]: NETFILTER_CFG table=filter:148 family=2 entries=14 op=nft_register_rule pid=5715 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:06:24.856000 audit[5715]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffda487d50 a2=0 a3=1 items=0 ppid=3574 pid=5715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:24.856000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:06:24.885000 audit[5715]: NETFILTER_CFG table=nat:149 family=2 entries=20 op=nft_register_rule pid=5715 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:06:24.885000 audit[5715]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffda487d50 a2=0 a3=1 items=0 ppid=3574 pid=5715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:24.885000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:06:24.942773 sshd[5702]: Connection closed by 4.153.228.146 port 58204 Jan 16 18:06:24.946990 sshd-session[5698]: pam_unix(sshd:session): session closed for user core Jan 16 18:06:24.953000 audit[5698]: USER_END pid=5698 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:24.953000 audit[5698]: CRED_DISP pid=5698 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:24.959099 systemd[1]: sshd@18-172.31.22.249:22-4.153.228.146:58204.service: Deactivated successfully. Jan 16 18:06:24.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.22.249:22-4.153.228.146:58204 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:24.966289 systemd[1]: session-20.scope: Deactivated successfully. Jan 16 18:06:24.972750 systemd-logind[1935]: Session 20 logged out. Waiting for processes to exit. Jan 16 18:06:24.979546 systemd-logind[1935]: Removed session 20. 
Jan 16 18:06:25.039279 systemd[1]: Started sshd@19-172.31.22.249:22-4.153.228.146:56284.service - OpenSSH per-connection server daemon (4.153.228.146:56284). Jan 16 18:06:25.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.22.249:22-4.153.228.146:56284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:25.120000 audit[5724]: NETFILTER_CFG table=filter:150 family=2 entries=26 op=nft_register_rule pid=5724 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:06:25.120000 audit[5724]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffc0c6eae0 a2=0 a3=1 items=0 ppid=3574 pid=5724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:25.120000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:06:25.135000 audit[5724]: NETFILTER_CFG table=nat:151 family=2 entries=20 op=nft_register_rule pid=5724 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:06:25.135000 audit[5724]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc0c6eae0 a2=0 a3=1 items=0 ppid=3574 pid=5724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:25.135000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:06:25.521000 audit[5721]: USER_ACCT pid=5721 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:25.523359 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 16 18:06:25.523450 kernel: audit: type=1101 audit(1768586785.521:845): pid=5721 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:25.524420 sshd[5721]: Accepted publickey for core from 4.153.228.146 port 56284 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:06:25.529000 audit[5721]: CRED_ACQ pid=5721 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:25.531721 sshd-session[5721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:06:25.535776 kernel: audit: type=1103 audit(1768586785.529:846): pid=5721 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:25.540897 kernel: audit: type=1006 audit(1768586785.529:847): pid=5721 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 
auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 16 18:06:25.529000 audit[5721]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff4019200 a2=3 a3=0 items=0 ppid=1 pid=5721 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:25.547562 kernel: audit: type=1300 audit(1768586785.529:847): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff4019200 a2=3 a3=0 items=0 ppid=1 pid=5721 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:25.529000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:25.550451 kernel: audit: type=1327 audit(1768586785.529:847): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:25.556216 systemd-logind[1935]: New session 21 of user core. Jan 16 18:06:25.562503 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 16 18:06:25.574000 audit[5721]: USER_START pid=5721 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:25.582000 audit[5726]: CRED_ACQ pid=5726 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:25.589632 kernel: audit: type=1105 audit(1768586785.574:848): pid=5721 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:25.589748 kernel: audit: type=1103 audit(1768586785.582:849): pid=5726 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:26.132596 sshd[5726]: Connection closed by 4.153.228.146 port 56284 Jan 16 18:06:26.133475 sshd-session[5721]: pam_unix(sshd:session): session closed for user core Jan 16 18:06:26.137000 audit[5721]: USER_END pid=5721 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:26.137000 audit[5721]: CRED_DISP pid=5721 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:26.148033 systemd[1]: sshd@19-172.31.22.249:22-4.153.228.146:56284.service: Deactivated successfully. 
Jan 16 18:06:26.152669 kernel: audit: type=1106 audit(1768586786.137:850): pid=5721 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:26.152798 kernel: audit: type=1104 audit(1768586786.137:851): pid=5721 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:26.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.22.249:22-4.153.228.146:56284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:26.155097 systemd[1]: session-21.scope: Deactivated successfully. Jan 16 18:06:26.158513 kernel: audit: type=1131 audit(1768586786.147:852): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.22.249:22-4.153.228.146:56284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:26.162720 systemd-logind[1935]: Session 21 logged out. Waiting for processes to exit. Jan 16 18:06:26.165211 systemd-logind[1935]: Removed session 21. Jan 16 18:06:26.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.22.249:22-4.153.228.146:56298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:26.230488 systemd[1]: Started sshd@20-172.31.22.249:22-4.153.228.146:56298.service - OpenSSH per-connection server daemon (4.153.228.146:56298). Jan 16 18:06:26.700000 audit[5736]: USER_ACCT pid=5736 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:26.700797 sshd[5736]: Accepted publickey for core from 4.153.228.146 port 56298 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:06:26.702000 audit[5736]: CRED_ACQ pid=5736 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:26.703000 audit[5736]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc6d9a890 a2=3 a3=0 items=0 ppid=1 pid=5736 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:26.703000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:26.704695 sshd-session[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:06:26.712920 systemd-logind[1935]: New session 22 of user core. Jan 16 18:06:26.722443 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 16 18:06:26.729000 audit[5736]: USER_START pid=5736 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:26.732000 audit[5740]: CRED_ACQ pid=5740 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:27.062315 sshd[5740]: Connection closed by 4.153.228.146 port 56298 Jan 16 18:06:27.062058 sshd-session[5736]: pam_unix(sshd:session): session closed for user core Jan 16 18:06:27.065000 audit[5736]: USER_END pid=5736 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:27.065000 audit[5736]: CRED_DISP pid=5736 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:27.069965 systemd[1]: sshd@20-172.31.22.249:22-4.153.228.146:56298.service: Deactivated successfully. Jan 16 18:06:27.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.22.249:22-4.153.228.146:56298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:27.076295 systemd[1]: session-22.scope: Deactivated successfully. Jan 16 18:06:27.085865 systemd-logind[1935]: Session 22 logged out. Waiting for processes to exit. Jan 16 18:06:27.089056 systemd-logind[1935]: Removed session 22. 
Jan 16 18:06:27.737847 kubelet[3428]: E0116 18:06:27.737721 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c9648cbf-qbkfw" podUID="170a3e0a-b54a-4909-a38e-9cdfe9da4171" Jan 16 18:06:30.738141 kubelet[3428]: E0116 18:06:30.738038 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-88slq" podUID="74813863-8ca6-40d9-bd92-5b37511fc2e0" Jan 16 18:06:30.738798 kubelet[3428]: E0116 18:06:30.738459 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54df9c5477-js6cv" podUID="20a749f5-b28f-4523-9135-e8877a359519" Jan 16 18:06:31.740434 kubelet[3428]: E0116 18:06:31.740351 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t9x2h" podUID="3932bc10-72fd-4993-add3-bcb26a36ba2d" Jan 16 18:06:32.171207 systemd[1]: Started sshd@21-172.31.22.249:22-4.153.228.146:56302.service - OpenSSH per-connection server daemon (4.153.228.146:56302). Jan 16 18:06:32.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.22.249:22-4.153.228.146:56302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:06:32.175146 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 16 18:06:32.175250 kernel: audit: type=1130 audit(1768586792.171:862): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.22.249:22-4.153.228.146:56302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:32.620000 audit[5782]: NETFILTER_CFG table=filter:152 family=2 entries=26 op=nft_register_rule pid=5782 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:06:32.620000 audit[5782]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff20eca70 a2=0 a3=1 items=0 ppid=3574 pid=5782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:32.631443 kernel: audit: type=1325 audit(1768586792.620:863): table=filter:152 family=2 entries=26 op=nft_register_rule pid=5782 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:06:32.631546 kernel: audit: type=1300 audit(1768586792.620:863): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff20eca70 a2=0 a3=1 items=0 ppid=3574 pid=5782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:32.631772 kernel: audit: type=1327 audit(1768586792.620:863): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:06:32.620000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:06:32.638000 audit[5782]: NETFILTER_CFG table=nat:153 family=2 entries=104 op=nft_register_chain pid=5782 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:06:32.638000 audit[5782]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=fffff20eca70 a2=0 a3=1 items=0 ppid=3574 pid=5782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:32.649868 kernel: audit: type=1325 audit(1768586792.638:864): table=nat:153 family=2 entries=104 op=nft_register_chain pid=5782 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:06:32.649950 kernel: audit: type=1300 audit(1768586792.638:864): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=fffff20eca70 a2=0 a3=1 items=0 ppid=3574 pid=5782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:32.649997 kernel: audit: type=1327 audit(1768586792.638:864): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:06:32.638000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:06:32.681000 audit[5777]: USER_ACCT pid=5777 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:32.682468 sshd[5777]: Accepted publickey for core from 4.153.228.146 port 56302 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:06:32.689281 kernel: audit: type=1101 audit(1768586792.681:865): pid=5777 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:32.690000 audit[5777]: CRED_ACQ pid=5777 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:32.692059 sshd-session[5777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:06:32.699218 kernel: audit: type=1103 audit(1768586792.690:866): pid=5777 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:32.699341 kernel: audit: type=1006 audit(1768586792.690:867): pid=5777 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 16 18:06:32.690000 audit[5777]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffec516690 a2=3 a3=0 items=0 ppid=1 pid=5777 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:32.690000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:32.708544 systemd-logind[1935]: New session 23 of user core. Jan 16 18:06:32.716412 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 16 18:06:32.723000 audit[5777]: USER_START pid=5777 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:32.726000 audit[5784]: CRED_ACQ pid=5784 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:32.736702 kubelet[3428]: E0116 18:06:32.736631 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54df9c5477-tm29z" podUID="8adc081a-39d6-4153-ae00-f3df7e2ba175" Jan 16 18:06:33.072154 sshd[5784]: Connection closed by 4.153.228.146 port 56302 Jan 16 18:06:33.073305 sshd-session[5777]: pam_unix(sshd:session): session closed for user core Jan 16 18:06:33.077000 audit[5777]: USER_END pid=5777 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:33.078000 audit[5777]: CRED_DISP pid=5777 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:33.084561 systemd[1]: sshd@21-172.31.22.249:22-4.153.228.146:56302.service: Deactivated successfully. Jan 16 18:06:33.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.22.249:22-4.153.228.146:56302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:33.092735 systemd[1]: session-23.scope: Deactivated successfully. Jan 16 18:06:33.099750 systemd-logind[1935]: Session 23 logged out. Waiting for processes to exit. Jan 16 18:06:33.103998 systemd-logind[1935]: Removed session 23. Jan 16 18:06:38.161685 systemd[1]: Started sshd@22-172.31.22.249:22-4.153.228.146:41628.service - OpenSSH per-connection server daemon (4.153.228.146:41628). Jan 16 18:06:38.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.22.249:22-4.153.228.146:41628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:38.166509 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 16 18:06:38.166712 kernel: audit: type=1130 audit(1768586798.161:873): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.22.249:22-4.153.228.146:41628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:06:38.639000 audit[5796]: USER_ACCT pid=5796 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:38.645108 sshd[5796]: Accepted publickey for core from 4.153.228.146 port 41628 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:06:38.647220 kernel: audit: type=1101 audit(1768586798.639:874): pid=5796 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:38.647000 audit[5796]: CRED_ACQ pid=5796 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:38.650525 sshd-session[5796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:06:38.657806 kernel: audit: type=1103 audit(1768586798.647:875): pid=5796 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:38.657920 kernel: audit: type=1006 audit(1768586798.648:876): pid=5796 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 16 18:06:38.657967 kernel: audit: type=1300 audit(1768586798.648:876): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffd497970 a2=3 a3=0 items=0 ppid=1 pid=5796 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:38.648000 audit[5796]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffd497970 a2=3 a3=0 items=0 ppid=1 pid=5796 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:38.664110 systemd-logind[1935]: New session 24 of user core. Jan 16 18:06:38.648000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:38.667280 kernel: audit: type=1327 audit(1768586798.648:876): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:38.669496 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 16 18:06:38.675000 audit[5796]: USER_START pid=5796 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:38.684000 audit[5800]: CRED_ACQ pid=5800 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:38.690198 kernel: audit: type=1105 audit(1768586798.675:877): pid=5796 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:38.690305 kernel: audit: type=1103 audit(1768586798.684:878): pid=5800 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:38.737425 kubelet[3428]: E0116 18:06:38.737251 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76db965947-cmgcq" podUID="d70fefd9-029e-4ddf-8559-5e71028d4fd0" Jan 16 18:06:39.002533 sshd[5800]: Connection closed by 4.153.228.146 port 41628 Jan 16 18:06:39.003615 sshd-session[5796]: pam_unix(sshd:session): session closed for user core Jan 16 18:06:39.006000 audit[5796]: USER_END pid=5796 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:39.021144 kernel: audit: type=1106 audit(1768586799.006:879): pid=5796 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:39.021260 kernel: audit: type=1104 audit(1768586799.006:880): pid=5796 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:39.006000 audit[5796]: CRED_DISP pid=5796 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 
18:06:39.015631 systemd[1]: sshd@22-172.31.22.249:22-4.153.228.146:41628.service: Deactivated successfully. Jan 16 18:06:39.019539 systemd[1]: session-24.scope: Deactivated successfully. Jan 16 18:06:39.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.22.249:22-4.153.228.146:41628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:39.022394 systemd-logind[1935]: Session 24 logged out. Waiting for processes to exit. Jan 16 18:06:39.027462 systemd-logind[1935]: Removed session 24. Jan 16 18:06:42.737423 kubelet[3428]: E0116 18:06:42.737348 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54df9c5477-js6cv" podUID="20a749f5-b28f-4523-9135-e8877a359519" Jan 16 18:06:42.739208 kubelet[3428]: E0116 18:06:42.737773 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c9648cbf-qbkfw" podUID="170a3e0a-b54a-4909-a38e-9cdfe9da4171" Jan 16 18:06:43.742630 kubelet[3428]: E0116 18:06:43.742480 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-88slq" podUID="74813863-8ca6-40d9-bd92-5b37511fc2e0" Jan 16 18:06:44.098184 systemd[1]: Started sshd@23-172.31.22.249:22-4.153.228.146:41632.service - OpenSSH per-connection server daemon (4.153.228.146:41632). 
Jan 16 18:06:44.106447 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 18:06:44.106582 kernel: audit: type=1130 audit(1768586804.099:882): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.22.249:22-4.153.228.146:41632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:44.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.22.249:22-4.153.228.146:41632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:44.581000 audit[5814]: USER_ACCT pid=5814 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:44.588310 sshd[5814]: Accepted publickey for core from 4.153.228.146 port 41632 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:06:44.591793 sshd-session[5814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:06:44.589000 audit[5814]: CRED_ACQ pid=5814 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:44.599867 kernel: audit: type=1101 audit(1768586804.581:883): pid=5814 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:44.599995 kernel: audit: type=1103 audit(1768586804.589:884): pid=5814 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:44.603879 kernel: audit: type=1006 audit(1768586804.589:885): pid=5814 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 16 18:06:44.589000 audit[5814]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe65619f0 a2=3 a3=0 items=0 ppid=1 pid=5814 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:44.611543 kernel: audit: type=1300 audit(1768586804.589:885): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe65619f0 a2=3 a3=0 items=0 ppid=1 pid=5814 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:44.589000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:44.616251 kernel: audit: type=1327 audit(1768586804.589:885): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:44.623243 systemd-logind[1935]: New session 25 of user core. Jan 16 18:06:44.629818 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 16 18:06:44.640000 audit[5814]: USER_START pid=5814 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:44.652286 kernel: audit: type=1105 audit(1768586804.640:886): pid=5814 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:44.652672 kernel: audit: type=1103 audit(1768586804.651:887): pid=5818 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:44.651000 audit[5818]: CRED_ACQ pid=5818 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:45.001924 sshd[5818]: Connection closed by 4.153.228.146 port 41632 Jan 16 18:06:45.003401 sshd-session[5814]: pam_unix(sshd:session): session closed for user core Jan 16 18:06:45.007000 audit[5814]: USER_END pid=5814 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:45.018858 systemd[1]: sshd@23-172.31.22.249:22-4.153.228.146:41632.service: Deactivated successfully. Jan 16 18:06:45.023139 systemd[1]: session-25.scope: Deactivated successfully. Jan 16 18:06:45.008000 audit[5814]: CRED_DISP pid=5814 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:45.027583 systemd-logind[1935]: Session 25 logged out. Waiting for processes to exit. Jan 16 18:06:45.033429 kernel: audit: type=1106 audit(1768586805.007:888): pid=5814 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:45.033559 kernel: audit: type=1104 audit(1768586805.008:889): pid=5814 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:45.038342 systemd-logind[1935]: Removed session 25. Jan 16 18:06:45.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.22.249:22-4.153.228.146:41632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:06:46.735567 kubelet[3428]: E0116 18:06:46.735483 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t9x2h" podUID="3932bc10-72fd-4993-add3-bcb26a36ba2d" Jan 16 18:06:47.736936 kubelet[3428]: E0116 18:06:47.736842 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54df9c5477-tm29z" podUID="8adc081a-39d6-4153-ae00-f3df7e2ba175" Jan 16 18:06:50.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.22.249:22-4.153.228.146:59992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:50.115418 systemd[1]: Started sshd@24-172.31.22.249:22-4.153.228.146:59992.service - OpenSSH per-connection server daemon (4.153.228.146:59992). Jan 16 18:06:50.120315 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 18:06:50.120673 kernel: audit: type=1130 audit(1768586810.114:891): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.22.249:22-4.153.228.146:59992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:06:50.661000 audit[5833]: USER_ACCT pid=5833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:50.663586 sshd[5833]: Accepted publickey for core from 4.153.228.146 port 59992 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:06:50.670000 audit[5833]: CRED_ACQ pid=5833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:50.680362 kernel: audit: type=1101 audit(1768586810.661:892): pid=5833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:50.680535 kernel: audit: type=1103 audit(1768586810.670:893): pid=5833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:50.681098 sshd-session[5833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:06:50.700268 kernel: audit: type=1006 audit(1768586810.670:894): pid=5833 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 16 18:06:50.700428 kernel: audit: type=1300 audit(1768586810.670:894): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff373b1d0 a2=3 a3=0 items=0 ppid=1 pid=5833 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:50.670000 audit[5833]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff373b1d0 a2=3 a3=0 items=0 ppid=1 pid=5833 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:50.707603 systemd-logind[1935]: New session 26 of user core. Jan 16 18:06:50.715171 kernel: audit: type=1327 audit(1768586810.670:894): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:50.670000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:50.716278 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 16 18:06:50.728000 audit[5833]: USER_START pid=5833 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:50.738000 audit[5838]: CRED_ACQ pid=5838 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:50.746165 kernel: audit: type=1105 audit(1768586810.728:895): pid=5833 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:50.746261 kernel: audit: type=1103 audit(1768586810.738:896): pid=5838 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:51.093349 sshd[5838]: Connection closed by 4.153.228.146 port 59992 Jan 16 18:06:51.096782 sshd-session[5833]: pam_unix(sshd:session): session closed for user core Jan 16 18:06:51.101000 audit[5833]: USER_END pid=5833 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:51.113023 systemd[1]: sshd@24-172.31.22.249:22-4.153.228.146:59992.service: Deactivated successfully. Jan 16 18:06:51.123677 kernel: audit: type=1106 audit(1768586811.101:897): pid=5833 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:51.123816 kernel: audit: type=1104 audit(1768586811.102:898): pid=5833 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:51.102000 audit[5833]: CRED_DISP pid=5833 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:51.125480 systemd[1]: session-26.scope: Deactivated successfully. Jan 16 18:06:51.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.22.249:22-4.153.228.146:59992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:51.128934 systemd-logind[1935]: Session 26 logged out. Waiting for processes to exit. Jan 16 18:06:51.135234 systemd-logind[1935]: Removed session 26. 
Jan 16 18:06:52.738430 containerd[1954]: time="2026-01-16T18:06:52.737830444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 18:06:53.027457 containerd[1954]: time="2026-01-16T18:06:53.026879329Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:06:53.029533 containerd[1954]: time="2026-01-16T18:06:53.029315557Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 18:06:53.029533 containerd[1954]: time="2026-01-16T18:06:53.029449981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 18:06:53.031257 kubelet[3428]: E0116 18:06:53.030285 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:06:53.031257 kubelet[3428]: E0116 18:06:53.030356 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:06:53.031257 kubelet[3428]: E0116 18:06:53.030571 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hgjq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76db965947-cmgcq_calico-system(d70fefd9-029e-4ddf-8559-5e71028d4fd0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 18:06:53.033446 kubelet[3428]: E0116 18:06:53.032345 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76db965947-cmgcq" podUID="d70fefd9-029e-4ddf-8559-5e71028d4fd0" Jan 16 18:06:53.740722 kubelet[3428]: E0116 18:06:53.740647 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c9648cbf-qbkfw" podUID="170a3e0a-b54a-4909-a38e-9cdfe9da4171" Jan 16 18:06:55.737545 kubelet[3428]: E0116 18:06:55.737482 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54df9c5477-js6cv" podUID="20a749f5-b28f-4523-9135-e8877a359519" Jan 16 18:06:56.177928 systemd[1]: Started sshd@25-172.31.22.249:22-4.153.228.146:49958.service - OpenSSH per-connection server daemon (4.153.228.146:49958). 
Jan 16 18:06:56.180958 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 18:06:56.181180 kernel: audit: type=1130 audit(1768586816.177:900): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.22.249:22-4.153.228.146:49958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:56.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.22.249:22-4.153.228.146:49958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:56.672763 sshd[5856]: Accepted publickey for core from 4.153.228.146 port 49958 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:06:56.671000 audit[5856]: USER_ACCT pid=5856 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:56.682720 sshd-session[5856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:06:56.679000 audit[5856]: CRED_ACQ pid=5856 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:56.688870 kernel: audit: type=1101 audit(1768586816.671:901): pid=5856 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:56.689033 kernel: audit: type=1103 audit(1768586816.679:902): pid=5856 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:56.692984 kernel: audit: type=1006 audit(1768586816.679:903): pid=5856 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 16 18:06:56.679000 audit[5856]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff9b62c80 a2=3 a3=0 items=0 ppid=1 pid=5856 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:56.701770 kernel: audit: type=1300 audit(1768586816.679:903): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff9b62c80 a2=3 a3=0 items=0 ppid=1 pid=5856 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:56.679000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:56.704555 kernel: audit: type=1327 audit(1768586816.679:903): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:56.712528 systemd-logind[1935]: New session 27 of user core. Jan 16 18:06:56.717451 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 16 18:06:56.725000 audit[5856]: USER_START pid=5856 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:56.741709 kubelet[3428]: E0116 18:06:56.741627 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-88slq" podUID="74813863-8ca6-40d9-bd92-5b37511fc2e0" Jan 16 18:06:56.744886 kernel: audit: type=1105 audit(1768586816.725:904): pid=5856 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:56.745465 kernel: audit: type=1103 audit(1768586816.741:905): pid=5860 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:56.741000 audit[5860]: CRED_ACQ pid=5860 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:57.064382 sshd[5860]: Connection closed by 4.153.228.146 port 49958 Jan 16 18:06:57.065393 sshd-session[5856]: pam_unix(sshd:session): session closed for user core Jan 16 18:06:57.069000 audit[5856]: USER_END pid=5856 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:57.083275 systemd[1]: sshd@25-172.31.22.249:22-4.153.228.146:49958.service: Deactivated successfully. 
Jan 16 18:06:57.078000 audit[5856]: CRED_DISP pid=5856 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:57.094753 kernel: audit: type=1106 audit(1768586817.069:906): pid=5856 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:57.094900 kernel: audit: type=1104 audit(1768586817.078:907): pid=5856 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:06:57.096173 systemd[1]: session-27.scope: Deactivated successfully. Jan 16 18:06:57.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.22.249:22-4.153.228.146:49958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:57.103218 systemd-logind[1935]: Session 27 logged out. Waiting for processes to exit. Jan 16 18:06:57.109561 systemd-logind[1935]: Removed session 27. Jan 16 18:07:00.737155 containerd[1954]: time="2026-01-16T18:07:00.736924127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 18:07:01.029165 containerd[1954]: time="2026-01-16T18:07:01.028982745Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:07:01.032346 containerd[1954]: time="2026-01-16T18:07:01.031688709Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 18:07:01.032720 containerd[1954]: time="2026-01-16T18:07:01.032190201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 18:07:01.033348 kubelet[3428]: E0116 18:07:01.033172 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 18:07:01.033348 kubelet[3428]: E0116 18:07:01.033241 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 18:07:01.035737 kubelet[3428]: E0116 18:07:01.035419 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kk276,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-t9x2h_calico-system(3932bc10-72fd-4993-add3-bcb26a36ba2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 18:07:01.036750 kubelet[3428]: E0116 18:07:01.036671 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t9x2h" podUID="3932bc10-72fd-4993-add3-bcb26a36ba2d" Jan 16 18:07:02.169489 systemd[1]: Started sshd@26-172.31.22.249:22-4.153.228.146:49960.service - OpenSSH per-connection server daemon 
(4.153.228.146:49960). Jan 16 18:07:02.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.22.249:22-4.153.228.146:49960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:07:02.173329 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 18:07:02.173425 kernel: audit: type=1130 audit(1768586822.168:909): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.22.249:22-4.153.228.146:49960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:07:02.680000 audit[5897]: USER_ACCT pid=5897 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:07:02.688631 sshd[5897]: Accepted publickey for core from 4.153.228.146 port 49960 ssh2: RSA SHA256:XlToc3BTDvJ+35oYy46Yvm2YUKsK8zQCAt87CvovqoA Jan 16 18:07:02.693511 sshd-session[5897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:07:02.702270 kernel: audit: type=1101 audit(1768586822.680:910): pid=5897 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:07:02.702418 kernel: audit: type=1103 audit(1768586822.690:911): pid=5897 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:07:02.690000 audit[5897]: CRED_ACQ pid=5897 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:07:02.713209 kernel: audit: type=1006 audit(1768586822.690:912): pid=5897 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 16 18:07:02.690000 audit[5897]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff6a5b720 a2=3 a3=0 items=0 ppid=1 pid=5897 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:02.723701 kernel: audit: type=1300 audit(1768586822.690:912): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff6a5b720 a2=3 a3=0 items=0 ppid=1 pid=5897 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:02.724205 systemd-logind[1935]: New session 28 of user core. Jan 16 18:07:02.729524 kernel: audit: type=1327 audit(1768586822.690:912): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:07:02.690000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:07:02.732492 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 16 18:07:02.741754 containerd[1954]: time="2026-01-16T18:07:02.741698053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:07:02.749000 audit[5897]: USER_START pid=5897 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:07:02.761000 audit[5903]: CRED_ACQ pid=5903 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:07:02.771282 kernel: audit: type=1105 audit(1768586822.749:913): pid=5897 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:07:02.771378 kernel: audit: type=1103 audit(1768586822.761:914): pid=5903 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:07:03.026302 containerd[1954]: time="2026-01-16T18:07:03.026099411Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:07:03.029199 containerd[1954]: time="2026-01-16T18:07:03.029085827Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:07:03.029387 containerd[1954]: time="2026-01-16T18:07:03.029258399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:07:03.030706 kubelet[3428]: E0116 18:07:03.030421 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:07:03.030706 kubelet[3428]: E0116 18:07:03.030495 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:07:03.031394 kubelet[3428]: E0116 18:07:03.030679 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rbr88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54df9c5477-tm29z_calico-apiserver(8adc081a-39d6-4153-ae00-f3df7e2ba175): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:07:03.032262 kubelet[3428]: E0116 18:07:03.031992 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54df9c5477-tm29z" podUID="8adc081a-39d6-4153-ae00-f3df7e2ba175" Jan 16 18:07:03.113337 sshd[5903]: Connection closed by 4.153.228.146 port 49960 Jan 16 18:07:03.115536 sshd-session[5897]: pam_unix(sshd:session): session closed for user core Jan 16 18:07:03.118000 audit[5897]: USER_END pid=5897 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:07:03.125899 systemd[1]: sshd@26-172.31.22.249:22-4.153.228.146:49960.service: Deactivated successfully. 
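Every image pull in these entries fails the same way: containerd gets a 404 from ghcr.io and kubelet surfaces it as ErrImagePull / ImagePullBackOff, which means the v3.30.4 tag is simply not published under ghcr.io/flatcar/calico/*. A minimal Python sketch, standard library only, of checking a tag's existence out of band against the OCI distribution API; the anonymous-token flow is an assumption about ghcr.io's auth layout, and the repository name is just the one from the goldmane failure above.

import json
import urllib.error
import urllib.request

def tag_exists(registry: str, repo: str, tag: str) -> bool:
    # Anonymous pull token (assumed token endpoint layout for ghcr.io).
    token_url = f"https://{registry}/token?scope=repository:{repo}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        f"https://{registry}/v2/{repo}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": ", ".join([
                "application/vnd.oci.image.index.v1+json",
                "application/vnd.docker.distribution.manifest.list.v2+json",
                "application/vnd.oci.image.manifest.v1+json",
                "application/vnd.docker.distribution.manifest.v2+json",
            ]),
        },
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:   # matches the "fetch failed after status: 404 Not Found" lines above
            return False
        raise

if __name__ == "__main__":
    print(tag_exists("ghcr.io", "flatcar/calico/goldmane", "v3.30.4"))

A HEAD on the manifests endpoint is enough to distinguish "tag missing" from registry auth or connectivity problems without downloading any layers.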
Jan 16 18:07:03.120000 audit[5897]: CRED_DISP pid=5897 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:07:03.132524 systemd[1]: session-28.scope: Deactivated successfully. Jan 16 18:07:03.135742 kernel: audit: type=1106 audit(1768586823.118:915): pid=5897 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:07:03.136110 kernel: audit: type=1104 audit(1768586823.120:916): pid=5897 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 16 18:07:03.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.22.249:22-4.153.228.146:49960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:07:03.139975 systemd-logind[1935]: Session 28 logged out. Waiting for processes to exit. Jan 16 18:07:03.146340 systemd-logind[1935]: Removed session 28. Jan 16 18:07:06.736844 containerd[1954]: time="2026-01-16T18:07:06.736399397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 18:07:07.025628 containerd[1954]: time="2026-01-16T18:07:07.025310235Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:07:07.027679 containerd[1954]: time="2026-01-16T18:07:07.027569679Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 18:07:07.027830 containerd[1954]: time="2026-01-16T18:07:07.027644763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 18:07:07.028141 kubelet[3428]: E0116 18:07:07.028065 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:07:07.029449 kubelet[3428]: E0116 18:07:07.028726 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:07:07.029449 kubelet[3428]: E0116 18:07:07.028891 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:90e9697f810349ed85f10d175bcdf8e8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g74js,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c9648cbf-qbkfw_calico-system(170a3e0a-b54a-4909-a38e-9cdfe9da4171): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 18:07:07.032158 containerd[1954]: time="2026-01-16T18:07:07.031683003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 18:07:07.296304 containerd[1954]: time="2026-01-16T18:07:07.296161888Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:07:07.300779 containerd[1954]: time="2026-01-16T18:07:07.300626740Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 18:07:07.301227 containerd[1954]: time="2026-01-16T18:07:07.300688768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 18:07:07.301569 kubelet[3428]: E0116 18:07:07.301498 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:07:07.301765 kubelet[3428]: E0116 18:07:07.301706 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:07:07.302280 kubelet[3428]: E0116 18:07:07.302152 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g74js,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c9648cbf-qbkfw_calico-system(170a3e0a-b54a-4909-a38e-9cdfe9da4171): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 18:07:07.303527 kubelet[3428]: E0116 18:07:07.303445 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c9648cbf-qbkfw" podUID="170a3e0a-b54a-4909-a38e-9cdfe9da4171" Jan 16 18:07:07.737162 kubelet[3428]: E0116 18:07:07.736914 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76db965947-cmgcq" podUID="d70fefd9-029e-4ddf-8559-5e71028d4fd0" Jan 16 18:07:10.735835 containerd[1954]: 
time="2026-01-16T18:07:10.735729573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 18:07:11.038074 containerd[1954]: time="2026-01-16T18:07:11.037888182Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:07:11.042469 containerd[1954]: time="2026-01-16T18:07:11.042301027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 18:07:11.042631 containerd[1954]: time="2026-01-16T18:07:11.042546391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 18:07:11.043511 kubelet[3428]: E0116 18:07:11.043436 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:07:11.044087 kubelet[3428]: E0116 18:07:11.043514 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:07:11.044087 kubelet[3428]: E0116 18:07:11.043823 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4p5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-88slq_calico-system(74813863-8ca6-40d9-bd92-5b37511fc2e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 18:07:11.045320 containerd[1954]: time="2026-01-16T18:07:11.045046987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:07:11.333772 containerd[1954]: time="2026-01-16T18:07:11.333571016Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:07:11.336157 containerd[1954]: time="2026-01-16T18:07:11.335763932Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:07:11.336537 containerd[1954]: time="2026-01-16T18:07:11.335894408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:07:11.336724 kubelet[3428]: E0116 18:07:11.336671 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:07:11.336862 kubelet[3428]: E0116 18:07:11.336739 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:07:11.337139 kubelet[3428]: E0116 18:07:11.337034 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xkph5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54df9c5477-js6cv_calico-apiserver(20a749f5-b28f-4523-9135-e8877a359519): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:07:11.337707 containerd[1954]: time="2026-01-16T18:07:11.337420220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 18:07:11.338725 kubelet[3428]: E0116 18:07:11.338669 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54df9c5477-js6cv" podUID="20a749f5-b28f-4523-9135-e8877a359519" Jan 16 18:07:11.620992 containerd[1954]: time="2026-01-16T18:07:11.620712717Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:07:11.623093 containerd[1954]: time="2026-01-16T18:07:11.622948233Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 18:07:11.623405 containerd[1954]: time="2026-01-16T18:07:11.623076393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 18:07:11.623800 kubelet[3428]: E0116 18:07:11.623730 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:07:11.623897 kubelet[3428]: E0116 18:07:11.623807 3428 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:07:11.624090 kubelet[3428]: E0116 18:07:11.624002 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4p5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-88slq_calico-system(74813863-8ca6-40d9-bd92-5b37511fc2e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 18:07:11.625377 kubelet[3428]: E0116 18:07:11.625304 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-88slq" podUID="74813863-8ca6-40d9-bd92-5b37511fc2e0" Jan 16 18:07:12.734800 kubelet[3428]: E0116 18:07:12.734738 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t9x2h" podUID="3932bc10-72fd-4993-add3-bcb26a36ba2d" Jan 16 18:07:16.735166 
kubelet[3428]: E0116 18:07:16.735064 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54df9c5477-tm29z" podUID="8adc081a-39d6-4153-ae00-f3df7e2ba175" Jan 16 18:07:17.675010 systemd[1]: cri-containerd-a4ff43560cfe6e6219d9bcd7f5b8db0b3a54fa69b29fc706f5f46ff96880e7d1.scope: Deactivated successfully. Jan 16 18:07:17.676105 systemd[1]: cri-containerd-a4ff43560cfe6e6219d9bcd7f5b8db0b3a54fa69b29fc706f5f46ff96880e7d1.scope: Consumed 27.104s CPU time, 104.8M memory peak. Jan 16 18:07:17.682552 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 18:07:17.682683 kernel: audit: type=1334 audit(1768586837.678:918): prog-id=149 op=UNLOAD Jan 16 18:07:17.678000 audit: BPF prog-id=149 op=UNLOAD Jan 16 18:07:17.683341 containerd[1954]: time="2026-01-16T18:07:17.683276200Z" level=info msg="received container exit event container_id:\"a4ff43560cfe6e6219d9bcd7f5b8db0b3a54fa69b29fc706f5f46ff96880e7d1\" id:\"a4ff43560cfe6e6219d9bcd7f5b8db0b3a54fa69b29fc706f5f46ff96880e7d1\" pid:3747 exit_status:1 exited_at:{seconds:1768586837 nanos:681885147}" Jan 16 18:07:17.684463 kernel: audit: type=1334 audit(1768586837.678:919): prog-id=153 op=UNLOAD Jan 16 18:07:17.678000 audit: BPF prog-id=153 op=UNLOAD Jan 16 18:07:17.725040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4ff43560cfe6e6219d9bcd7f5b8db0b3a54fa69b29fc706f5f46ff96880e7d1-rootfs.mount: Deactivated successfully. Jan 16 18:07:17.775840 systemd[1]: cri-containerd-1f406ebd7fc3f7623765978e044de838cb45e23f2a0b1e70e54ba4a71c921342.scope: Deactivated successfully. Jan 16 18:07:17.777673 systemd[1]: cri-containerd-1f406ebd7fc3f7623765978e044de838cb45e23f2a0b1e70e54ba4a71c921342.scope: Consumed 5.978s CPU time, 54M memory peak. Jan 16 18:07:17.779000 audit: BPF prog-id=259 op=LOAD Jan 16 18:07:17.784859 containerd[1954]: time="2026-01-16T18:07:17.784796956Z" level=info msg="received container exit event container_id:\"1f406ebd7fc3f7623765978e044de838cb45e23f2a0b1e70e54ba4a71c921342\" id:\"1f406ebd7fc3f7623765978e044de838cb45e23f2a0b1e70e54ba4a71c921342\" pid:3262 exit_status:1 exited_at:{seconds:1768586837 nanos:783690388}" Jan 16 18:07:17.786807 kernel: audit: type=1334 audit(1768586837.779:920): prog-id=259 op=LOAD Jan 16 18:07:17.786913 kernel: audit: type=1334 audit(1768586837.779:921): prog-id=91 op=UNLOAD Jan 16 18:07:17.779000 audit: BPF prog-id=91 op=UNLOAD Jan 16 18:07:17.782000 audit: BPF prog-id=106 op=UNLOAD Jan 16 18:07:17.789302 kernel: audit: type=1334 audit(1768586837.782:922): prog-id=106 op=UNLOAD Jan 16 18:07:17.782000 audit: BPF prog-id=110 op=UNLOAD Jan 16 18:07:17.791555 kernel: audit: type=1334 audit(1768586837.782:923): prog-id=110 op=UNLOAD Jan 16 18:07:17.834537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f406ebd7fc3f7623765978e044de838cb45e23f2a0b1e70e54ba4a71c921342-rootfs.mount: Deactivated successfully. 
Jan 16 18:07:18.451597 kubelet[3428]: I0116 18:07:18.451526 3428 scope.go:117] "RemoveContainer" containerID="a4ff43560cfe6e6219d9bcd7f5b8db0b3a54fa69b29fc706f5f46ff96880e7d1" Jan 16 18:07:18.455443 containerd[1954]: time="2026-01-16T18:07:18.455366835Z" level=info msg="CreateContainer within sandbox \"694afc46986c4405d73f8c529f08a1df4f5242ca7bbebfd83c86f9943f892e85\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 16 18:07:18.458705 kubelet[3428]: I0116 18:07:18.458657 3428 scope.go:117] "RemoveContainer" containerID="1f406ebd7fc3f7623765978e044de838cb45e23f2a0b1e70e54ba4a71c921342" Jan 16 18:07:18.461985 containerd[1954]: time="2026-01-16T18:07:18.461938083Z" level=info msg="CreateContainer within sandbox \"0cf68e6ecfab145105983c9a62b949c9b291bbf3c6730a73f4e7efbe5f4a182b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 16 18:07:18.478461 containerd[1954]: time="2026-01-16T18:07:18.478398387Z" level=info msg="Container 210180b04ac9aaaa2eebc3e98d7495235e9ba6300604d84308521acd9cadfa2e: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:07:18.487778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1664343273.mount: Deactivated successfully. Jan 16 18:07:18.499708 containerd[1954]: time="2026-01-16T18:07:18.499632112Z" level=info msg="Container 9ea8170c6d8da20529e2ba99a085eccaadcd6a65e8d704c599f8d56e0d2a49b7: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:07:18.507604 containerd[1954]: time="2026-01-16T18:07:18.507482608Z" level=info msg="CreateContainer within sandbox \"694afc46986c4405d73f8c529f08a1df4f5242ca7bbebfd83c86f9943f892e85\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"210180b04ac9aaaa2eebc3e98d7495235e9ba6300604d84308521acd9cadfa2e\"" Jan 16 18:07:18.508250 containerd[1954]: time="2026-01-16T18:07:18.508087492Z" level=info msg="StartContainer for \"210180b04ac9aaaa2eebc3e98d7495235e9ba6300604d84308521acd9cadfa2e\"" Jan 16 18:07:18.509875 containerd[1954]: time="2026-01-16T18:07:18.509748340Z" level=info msg="connecting to shim 210180b04ac9aaaa2eebc3e98d7495235e9ba6300604d84308521acd9cadfa2e" address="unix:///run/containerd/s/3d208ab3ac49fa4a75a1ca7781353a447ff5fa2700a09a2a532624efc82c2dbb" protocol=ttrpc version=3 Jan 16 18:07:18.523724 containerd[1954]: time="2026-01-16T18:07:18.523622368Z" level=info msg="CreateContainer within sandbox \"0cf68e6ecfab145105983c9a62b949c9b291bbf3c6730a73f4e7efbe5f4a182b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9ea8170c6d8da20529e2ba99a085eccaadcd6a65e8d704c599f8d56e0d2a49b7\"" Jan 16 18:07:18.525359 containerd[1954]: time="2026-01-16T18:07:18.525184420Z" level=info msg="StartContainer for \"9ea8170c6d8da20529e2ba99a085eccaadcd6a65e8d704c599f8d56e0d2a49b7\"" Jan 16 18:07:18.529954 containerd[1954]: time="2026-01-16T18:07:18.529826464Z" level=info msg="connecting to shim 9ea8170c6d8da20529e2ba99a085eccaadcd6a65e8d704c599f8d56e0d2a49b7" address="unix:///run/containerd/s/6b9dbb0c925572450377404f15b1ff69b8f207b4ed3ee457093752bc272dd730" protocol=ttrpc version=3 Jan 16 18:07:18.548522 systemd[1]: Started cri-containerd-210180b04ac9aaaa2eebc3e98d7495235e9ba6300604d84308521acd9cadfa2e.scope - libcontainer container 210180b04ac9aaaa2eebc3e98d7495235e9ba6300604d84308521acd9cadfa2e. Jan 16 18:07:18.588913 systemd[1]: Started cri-containerd-9ea8170c6d8da20529e2ba99a085eccaadcd6a65e8d704c599f8d56e0d2a49b7.scope - libcontainer container 9ea8170c6d8da20529e2ba99a085eccaadcd6a65e8d704c599f8d56e0d2a49b7. 
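Both "received container exit event" records above report exit_status:1 for long-running containers (the one later re-created as tigera-operator had consumed 27.104s of CPU time, the kube-controller-manager one 5.978s). A small sketch for pulling the container id, exit status and exit time out of lines in exactly this format, for example when skimming a captured journal like this one; the regular expressions are written against the fields shown above.

import re

CID = re.compile(r'container_id:\\?"([0-9a-f]{64})')
STATUS = re.compile(r'exit_status:(\d+)')
EXITED = re.compile(r'exited_at:\{seconds:(\d+)')

def parse_exit_events(lines):
    # Yield (short container id, exit status, unix seconds) for containerd exit events.
    for line in lines:
        if "received container exit event" not in line:
            continue
        cid, status, exited = CID.search(line), STATUS.search(line), EXITED.search(line)
        if cid and status and exited:
            yield cid.group(1)[:12], int(status.group(1)), int(exited.group(1))

if __name__ == "__main__":
    sample = (
        'time="2026-01-16T18:07:17.683276200Z" level=info '
        'msg="received container exit event '
        'container_id:\\"a4ff43560cfe6e6219d9bcd7f5b8db0b3a54fa69b29fc706f5f46ff96880e7d1\\" '
        'pid:3747 exit_status:1 exited_at:{seconds:1768586837 nanos:681885147}"'
    )
    for cid, status, seconds in parse_exit_events([sample]):
        print(cid, status, seconds)   # -> a4ff43560cfe 1 1768586837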
Jan 16 18:07:18.595000 audit: BPF prog-id=260 op=LOAD Jan 16 18:07:18.602260 kernel: audit: type=1334 audit(1768586838.595:924): prog-id=260 op=LOAD Jan 16 18:07:18.602408 kernel: audit: type=1334 audit(1768586838.598:925): prog-id=261 op=LOAD Jan 16 18:07:18.598000 audit: BPF prog-id=261 op=LOAD Jan 16 18:07:18.598000 audit[5964]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3530 pid=5964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:18.610325 kernel: audit: type=1300 audit(1768586838.598:925): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3530 pid=5964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:18.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231303138306230346163396161616132656562633365393864373439 Jan 16 18:07:18.616867 kernel: audit: type=1327 audit(1768586838.598:925): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231303138306230346163396161616132656562633365393864373439 Jan 16 18:07:18.599000 audit: BPF prog-id=261 op=UNLOAD Jan 16 18:07:18.599000 audit[5964]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3530 pid=5964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:18.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231303138306230346163396161616132656562633365393864373439 Jan 16 18:07:18.599000 audit: BPF prog-id=262 op=LOAD Jan 16 18:07:18.599000 audit[5964]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3530 pid=5964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:18.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231303138306230346163396161616132656562633365393864373439 Jan 16 18:07:18.601000 audit: BPF prog-id=263 op=LOAD Jan 16 18:07:18.601000 audit[5964]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3530 pid=5964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:18.601000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231303138306230346163396161616132656562633365393864373439 Jan 16 18:07:18.602000 audit: BPF prog-id=263 op=UNLOAD Jan 16 18:07:18.602000 audit[5964]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3530 pid=5964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:18.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231303138306230346163396161616132656562633365393864373439 Jan 16 18:07:18.602000 audit: BPF prog-id=262 op=UNLOAD Jan 16 18:07:18.602000 audit[5964]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3530 pid=5964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:18.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231303138306230346163396161616132656562633365393864373439 Jan 16 18:07:18.602000 audit: BPF prog-id=264 op=LOAD Jan 16 18:07:18.602000 audit[5964]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3530 pid=5964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:18.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231303138306230346163396161616132656562633365393864373439 Jan 16 18:07:18.644000 audit: BPF prog-id=265 op=LOAD Jan 16 18:07:18.646000 audit: BPF prog-id=266 op=LOAD Jan 16 18:07:18.646000 audit[5977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000228180 a2=98 a3=0 items=0 ppid=3106 pid=5977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:18.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613831373063366438646132303532396532626139396130383565 Jan 16 18:07:18.646000 audit: BPF prog-id=266 op=UNLOAD Jan 16 18:07:18.646000 audit[5977]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3106 pid=5977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:18.646000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613831373063366438646132303532396532626139396130383565 Jan 16 18:07:18.647000 audit: BPF prog-id=267 op=LOAD Jan 16 18:07:18.647000 audit[5977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40002283e8 a2=98 a3=0 items=0 ppid=3106 pid=5977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:18.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613831373063366438646132303532396532626139396130383565 Jan 16 18:07:18.647000 audit: BPF prog-id=268 op=LOAD Jan 16 18:07:18.647000 audit[5977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000228168 a2=98 a3=0 items=0 ppid=3106 pid=5977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:18.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613831373063366438646132303532396532626139396130383565 Jan 16 18:07:18.648000 audit: BPF prog-id=268 op=UNLOAD Jan 16 18:07:18.648000 audit[5977]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3106 pid=5977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:18.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613831373063366438646132303532396532626139396130383565 Jan 16 18:07:18.648000 audit: BPF prog-id=267 op=UNLOAD Jan 16 18:07:18.648000 audit[5977]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3106 pid=5977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:18.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613831373063366438646132303532396532626139396130383565 Jan 16 18:07:18.649000 audit: BPF prog-id=269 op=LOAD Jan 16 18:07:18.649000 audit[5977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000228648 a2=98 a3=0 items=0 ppid=3106 pid=5977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:18.649000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613831373063366438646132303532396532626139396130383565 Jan 16 18:07:18.669879 containerd[1954]: time="2026-01-16T18:07:18.669819544Z" level=info msg="StartContainer for \"210180b04ac9aaaa2eebc3e98d7495235e9ba6300604d84308521acd9cadfa2e\" returns successfully" Jan 16 18:07:18.737245 kubelet[3428]: E0116 18:07:18.736288 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76db965947-cmgcq" podUID="d70fefd9-029e-4ddf-8559-5e71028d4fd0" Jan 16 18:07:18.737064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1110330744.mount: Deactivated successfully. Jan 16 18:07:18.757169 containerd[1954]: time="2026-01-16T18:07:18.757086569Z" level=info msg="StartContainer for \"9ea8170c6d8da20529e2ba99a085eccaadcd6a65e8d704c599f8d56e0d2a49b7\" returns successfully" Jan 16 18:07:21.735433 kubelet[3428]: E0116 18:07:21.735266 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c9648cbf-qbkfw" podUID="170a3e0a-b54a-4909-a38e-9cdfe9da4171" Jan 16 18:07:23.735753 kubelet[3428]: E0116 18:07:23.735561 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-88slq" podUID="74813863-8ca6-40d9-bd92-5b37511fc2e0" Jan 16 18:07:23.783872 kubelet[3428]: E0116 18:07:23.783780 3428 controller.go:195] "Failed to update lease" err="Put 
\"https://172.31.22.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-249?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 16 18:07:23.807267 systemd[1]: cri-containerd-7d7114f73586481f9cd8e40feffe9cbad3b034778a89dd322699c0d137a069f0.scope: Deactivated successfully. Jan 16 18:07:23.808894 systemd[1]: cri-containerd-7d7114f73586481f9cd8e40feffe9cbad3b034778a89dd322699c0d137a069f0.scope: Consumed 4.673s CPU time, 20.3M memory peak. Jan 16 18:07:23.814721 kernel: kauditd_printk_skb: 40 callbacks suppressed Jan 16 18:07:23.814840 kernel: audit: type=1334 audit(1768586843.810:940): prog-id=270 op=LOAD Jan 16 18:07:23.810000 audit: BPF prog-id=270 op=LOAD Jan 16 18:07:23.812000 audit: BPF prog-id=96 op=UNLOAD Jan 16 18:07:23.816716 containerd[1954]: time="2026-01-16T18:07:23.816657610Z" level=info msg="received container exit event container_id:\"7d7114f73586481f9cd8e40feffe9cbad3b034778a89dd322699c0d137a069f0\" id:\"7d7114f73586481f9cd8e40feffe9cbad3b034778a89dd322699c0d137a069f0\" pid:3271 exit_status:1 exited_at:{seconds:1768586843 nanos:812266558}" Jan 16 18:07:23.816000 audit: BPF prog-id=111 op=UNLOAD Jan 16 18:07:23.820324 kernel: audit: type=1334 audit(1768586843.812:941): prog-id=96 op=UNLOAD Jan 16 18:07:23.820544 kernel: audit: type=1334 audit(1768586843.816:942): prog-id=111 op=UNLOAD Jan 16 18:07:23.820592 kernel: audit: type=1334 audit(1768586843.816:943): prog-id=115 op=UNLOAD Jan 16 18:07:23.816000 audit: BPF prog-id=115 op=UNLOAD Jan 16 18:07:23.864374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d7114f73586481f9cd8e40feffe9cbad3b034778a89dd322699c0d137a069f0-rootfs.mount: Deactivated successfully. Jan 16 18:07:24.489421 kubelet[3428]: I0116 18:07:24.489306 3428 scope.go:117] "RemoveContainer" containerID="7d7114f73586481f9cd8e40feffe9cbad3b034778a89dd322699c0d137a069f0" Jan 16 18:07:24.493722 containerd[1954]: time="2026-01-16T18:07:24.493641213Z" level=info msg="CreateContainer within sandbox \"7bc4297459c02ee725d6e752d422514e9c81c486f352e48d8d71da824e2b8493\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 16 18:07:24.517748 containerd[1954]: time="2026-01-16T18:07:24.515893773Z" level=info msg="Container c50c5d866b5ca81ae437b6e8d5682b4cb57281f49a969ce2ace8d5ec2b991346: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:07:24.535800 containerd[1954]: time="2026-01-16T18:07:24.535722262Z" level=info msg="CreateContainer within sandbox \"7bc4297459c02ee725d6e752d422514e9c81c486f352e48d8d71da824e2b8493\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"c50c5d866b5ca81ae437b6e8d5682b4cb57281f49a969ce2ace8d5ec2b991346\"" Jan 16 18:07:24.536664 containerd[1954]: time="2026-01-16T18:07:24.536603410Z" level=info msg="StartContainer for \"c50c5d866b5ca81ae437b6e8d5682b4cb57281f49a969ce2ace8d5ec2b991346\"" Jan 16 18:07:24.539352 containerd[1954]: time="2026-01-16T18:07:24.539245582Z" level=info msg="connecting to shim c50c5d866b5ca81ae437b6e8d5682b4cb57281f49a969ce2ace8d5ec2b991346" address="unix:///run/containerd/s/ec21f3bb03eefe9435b21c0645969036483c31d39d4a7d37317cf1b9525a4032" protocol=ttrpc version=3 Jan 16 18:07:24.590495 systemd[1]: Started cri-containerd-c50c5d866b5ca81ae437b6e8d5682b4cb57281f49a969ce2ace8d5ec2b991346.scope - libcontainer container c50c5d866b5ca81ae437b6e8d5682b4cb57281f49a969ce2ace8d5ec2b991346. 
Jan 16 18:07:24.615000 audit: BPF prog-id=271 op=LOAD Jan 16 18:07:24.617000 audit: BPF prog-id=272 op=LOAD Jan 16 18:07:24.620136 kernel: audit: type=1334 audit(1768586844.615:944): prog-id=271 op=LOAD Jan 16 18:07:24.620255 kernel: audit: type=1334 audit(1768586844.617:945): prog-id=272 op=LOAD Jan 16 18:07:24.617000 audit[6037]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3135 pid=6037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:24.627267 kernel: audit: type=1300 audit(1768586844.617:945): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3135 pid=6037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:24.617000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335306335643836366235636138316165343337623665386435363832 Jan 16 18:07:24.633453 kernel: audit: type=1327 audit(1768586844.617:945): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335306335643836366235636138316165343337623665386435363832 Jan 16 18:07:24.617000 audit: BPF prog-id=272 op=UNLOAD Jan 16 18:07:24.635216 kernel: audit: type=1334 audit(1768586844.617:946): prog-id=272 op=UNLOAD Jan 16 18:07:24.617000 audit[6037]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3135 pid=6037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:24.641410 kernel: audit: type=1300 audit(1768586844.617:946): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3135 pid=6037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:24.617000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335306335643836366235636138316165343337623665386435363832 Jan 16 18:07:24.619000 audit: BPF prog-id=273 op=LOAD Jan 16 18:07:24.619000 audit[6037]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3135 pid=6037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:24.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335306335643836366235636138316165343337623665386435363832 Jan 16 18:07:24.619000 audit: BPF prog-id=274 op=LOAD Jan 16 18:07:24.619000 audit[6037]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3135 pid=6037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:24.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335306335643836366235636138316165343337623665386435363832 Jan 16 18:07:24.619000 audit: BPF prog-id=274 op=UNLOAD Jan 16 18:07:24.619000 audit[6037]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3135 pid=6037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:24.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335306335643836366235636138316165343337623665386435363832 Jan 16 18:07:24.619000 audit: BPF prog-id=273 op=UNLOAD Jan 16 18:07:24.619000 audit[6037]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3135 pid=6037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:24.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335306335643836366235636138316165343337623665386435363832 Jan 16 18:07:24.619000 audit: BPF prog-id=275 op=LOAD Jan 16 18:07:24.619000 audit[6037]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3135 pid=6037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:24.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335306335643836366235636138316165343337623665386435363832 Jan 16 18:07:24.699061 containerd[1954]: time="2026-01-16T18:07:24.698925022Z" level=info msg="StartContainer for \"c50c5d866b5ca81ae437b6e8d5682b4cb57281f49a969ce2ace8d5ec2b991346\" returns successfully" Jan 16 18:07:24.735082 kubelet[3428]: E0116 18:07:24.735013 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t9x2h" podUID="3932bc10-72fd-4993-add3-bcb26a36ba2d" Jan 16 18:07:26.735797 kubelet[3428]: E0116 18:07:26.735732 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54df9c5477-js6cv" podUID="20a749f5-b28f-4523-9135-e8877a359519"