Jan 13 23:41:35.991383 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jan 13 23:41:35.991430 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Jan 13 21:43:11 -00 2026 Jan 13 23:41:35.991455 kernel: KASLR disabled due to lack of seed Jan 13 23:41:35.991472 kernel: efi: EFI v2.7 by EDK II Jan 13 23:41:35.991489 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78557598 Jan 13 23:41:35.991505 kernel: secureboot: Secure boot disabled Jan 13 23:41:35.991524 kernel: ACPI: Early table checksum verification disabled Jan 13 23:41:35.991540 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jan 13 23:41:35.991557 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jan 13 23:41:35.991577 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 13 23:41:35.991595 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 13 23:41:35.991610 kernel: ACPI: FACS 0x0000000078630000 000040 Jan 13 23:41:35.991626 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 13 23:41:35.991643 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jan 13 23:41:35.991666 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jan 13 23:41:35.991683 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jan 13 23:41:35.991700 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 13 23:41:35.991717 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jan 13 23:41:35.991734 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jan 13 23:41:35.991751 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jan 13 23:41:35.991768 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jan 13 23:41:35.991785 kernel: printk: legacy bootconsole [uart0] enabled Jan 13 23:41:35.991802 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 13 23:41:35.991819 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jan 13 23:41:35.991840 kernel: NODE_DATA(0) allocated [mem 0x4b584ea00-0x4b5855fff] Jan 13 23:41:35.991857 kernel: Zone ranges: Jan 13 23:41:35.991874 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 13 23:41:35.991890 kernel: DMA32 empty Jan 13 23:41:35.991907 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jan 13 23:41:35.991924 kernel: Device empty Jan 13 23:41:35.991941 kernel: Movable zone start for each node Jan 13 23:41:35.991958 kernel: Early memory node ranges Jan 13 23:41:35.991975 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jan 13 23:41:35.991993 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jan 13 23:41:35.992010 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jan 13 23:41:35.995593 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jan 13 23:41:35.995646 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jan 13 23:41:35.995666 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jan 13 23:41:35.995684 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jan 13 23:41:35.995702 
kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jan 13 23:41:35.995728 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jan 13 23:41:35.995750 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jan 13 23:41:35.995769 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Jan 13 23:41:35.995787 kernel: psci: probing for conduit method from ACPI. Jan 13 23:41:35.995805 kernel: psci: PSCIv1.0 detected in firmware. Jan 13 23:41:35.995823 kernel: psci: Using standard PSCI v0.2 function IDs Jan 13 23:41:35.995841 kernel: psci: Trusted OS migration not required Jan 13 23:41:35.995859 kernel: psci: SMC Calling Convention v1.1 Jan 13 23:41:35.995877 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Jan 13 23:41:35.995895 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 13 23:41:35.995917 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 13 23:41:35.995935 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 13 23:41:35.995953 kernel: Detected PIPT I-cache on CPU0 Jan 13 23:41:35.995971 kernel: CPU features: detected: GIC system register CPU interface Jan 13 23:41:35.995990 kernel: CPU features: detected: Spectre-v2 Jan 13 23:41:35.996008 kernel: CPU features: detected: Spectre-v3a Jan 13 23:41:35.997401 kernel: CPU features: detected: Spectre-BHB Jan 13 23:41:35.997423 kernel: CPU features: detected: ARM erratum 1742098 Jan 13 23:41:35.997442 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jan 13 23:41:35.997460 kernel: alternatives: applying boot alternatives Jan 13 23:41:35.997480 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a2e92265a189403c21ae2a2ae9e6d4fed0782e0e430fbcb369a7bb0db156274f Jan 13 23:41:35.997509 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 23:41:35.997527 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 23:41:35.997545 kernel: Fallback order for Node 0: 0 Jan 13 23:41:35.997563 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Jan 13 23:41:35.997581 kernel: Policy zone: Normal Jan 13 23:41:35.997600 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 23:41:35.997619 kernel: software IO TLB: area num 2. Jan 13 23:41:35.997638 kernel: software IO TLB: mapped [mem 0x000000006f800000-0x0000000073800000] (64MB) Jan 13 23:41:35.997656 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 13 23:41:35.997674 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 23:41:35.997698 kernel: rcu: RCU event tracing is enabled. Jan 13 23:41:35.997717 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 13 23:41:35.997735 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 23:41:35.997753 kernel: Tracing variant of Tasks RCU enabled. Jan 13 23:41:35.997770 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 13 23:41:35.997788 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 13 23:41:35.997806 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 13 23:41:35.997825 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 23:41:35.997842 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 13 23:41:35.997860 kernel: GICv3: 96 SPIs implemented Jan 13 23:41:35.997878 kernel: GICv3: 0 Extended SPIs implemented Jan 13 23:41:35.997899 kernel: Root IRQ handler: gic_handle_irq Jan 13 23:41:35.997917 kernel: GICv3: GICv3 features: 16 PPIs Jan 13 23:41:35.997935 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jan 13 23:41:35.997952 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jan 13 23:41:35.997970 kernel: ITS [mem 0x10080000-0x1009ffff] Jan 13 23:41:35.997988 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Jan 13 23:41:35.998006 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Jan 13 23:41:35.998049 kernel: GICv3: using LPI property table @0x0000000400110000 Jan 13 23:41:35.998068 kernel: ITS: Using hypervisor restricted LPI range [128] Jan 13 23:41:35.998086 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Jan 13 23:41:35.998104 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 23:41:35.998128 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jan 13 23:41:35.998146 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jan 13 23:41:35.998164 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jan 13 23:41:35.998182 kernel: Console: colour dummy device 80x25 Jan 13 23:41:35.998201 kernel: printk: legacy console [tty1] enabled Jan 13 23:41:35.998221 kernel: ACPI: Core revision 20240827 Jan 13 23:41:35.998240 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jan 13 23:41:35.998259 kernel: pid_max: default: 32768 minimum: 301 Jan 13 23:41:35.998281 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 13 23:41:35.998300 kernel: landlock: Up and running. Jan 13 23:41:35.998318 kernel: SELinux: Initializing. Jan 13 23:41:35.998336 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 23:41:35.998355 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 23:41:35.998373 kernel: rcu: Hierarchical SRCU implementation. Jan 13 23:41:35.998392 kernel: rcu: Max phase no-delay instances is 400. Jan 13 23:41:35.998410 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 13 23:41:35.998433 kernel: Remapping and enabling EFI services. Jan 13 23:41:35.998451 kernel: smp: Bringing up secondary CPUs ... Jan 13 23:41:35.998470 kernel: Detected PIPT I-cache on CPU1 Jan 13 23:41:35.998488 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jan 13 23:41:35.998507 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Jan 13 23:41:35.998525 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jan 13 23:41:35.998544 kernel: smp: Brought up 1 node, 2 CPUs Jan 13 23:41:35.998566 kernel: SMP: Total of 2 processors activated. 
Jan 13 23:41:35.998585 kernel: CPU: All CPU(s) started at EL1 Jan 13 23:41:35.998614 kernel: CPU features: detected: 32-bit EL0 Support Jan 13 23:41:35.998636 kernel: CPU features: detected: 32-bit EL1 Support Jan 13 23:41:35.998656 kernel: CPU features: detected: CRC32 instructions Jan 13 23:41:35.998675 kernel: alternatives: applying system-wide alternatives Jan 13 23:41:35.998695 kernel: Memory: 3823340K/4030464K available (11200K kernel code, 2458K rwdata, 9092K rodata, 12480K init, 1038K bss, 185776K reserved, 16384K cma-reserved) Jan 13 23:41:35.998714 kernel: devtmpfs: initialized Jan 13 23:41:35.998738 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 23:41:35.998758 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 13 23:41:35.998777 kernel: 23632 pages in range for non-PLT usage Jan 13 23:41:35.998796 kernel: 515152 pages in range for PLT usage Jan 13 23:41:35.998815 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 23:41:35.998838 kernel: SMBIOS 3.0.0 present. Jan 13 23:41:35.998857 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jan 13 23:41:35.998876 kernel: DMI: Memory slots populated: 0/0 Jan 13 23:41:35.998895 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 23:41:35.998914 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 13 23:41:35.998934 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 13 23:41:35.998953 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 13 23:41:35.998977 kernel: audit: initializing netlink subsys (disabled) Jan 13 23:41:35.998996 kernel: audit: type=2000 audit(0.227:1): state=initialized audit_enabled=0 res=1 Jan 13 23:41:36.000825 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 23:41:36.000869 kernel: cpuidle: using governor menu Jan 13 23:41:36.000890 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 13 23:41:36.000911 kernel: ASID allocator initialised with 65536 entries Jan 13 23:41:36.000932 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 23:41:36.000963 kernel: Serial: AMBA PL011 UART driver Jan 13 23:41:36.000985 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 23:41:36.001007 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 23:41:36.001059 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 13 23:41:36.001080 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 13 23:41:36.001100 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 23:41:36.001119 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 23:41:36.001144 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 13 23:41:36.001163 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 13 23:41:36.001182 kernel: ACPI: Added _OSI(Module Device) Jan 13 23:41:36.001202 kernel: ACPI: Added _OSI(Processor Device) Jan 13 23:41:36.001221 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 23:41:36.001240 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 23:41:36.001259 kernel: ACPI: Interpreter enabled Jan 13 23:41:36.001282 kernel: ACPI: Using GIC for interrupt routing Jan 13 23:41:36.001301 kernel: ACPI: MCFG table detected, 1 entries Jan 13 23:41:36.001320 kernel: ACPI: CPU0 has been hot-added Jan 13 23:41:36.001339 kernel: ACPI: CPU1 has been hot-added Jan 13 23:41:36.001358 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00]) Jan 13 23:41:36.001728 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 23:41:36.001998 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 13 23:41:36.004862 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 13 23:41:36.005201 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00 Jan 13 23:41:36.005540 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00] Jan 13 23:41:36.005574 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jan 13 23:41:36.005595 kernel: acpiphp: Slot [1] registered Jan 13 23:41:36.005614 kernel: acpiphp: Slot [2] registered Jan 13 23:41:36.005645 kernel: acpiphp: Slot [3] registered Jan 13 23:41:36.005665 kernel: acpiphp: Slot [4] registered Jan 13 23:41:36.005685 kernel: acpiphp: Slot [5] registered Jan 13 23:41:36.005704 kernel: acpiphp: Slot [6] registered Jan 13 23:41:36.005723 kernel: acpiphp: Slot [7] registered Jan 13 23:41:36.005742 kernel: acpiphp: Slot [8] registered Jan 13 23:41:36.005761 kernel: acpiphp: Slot [9] registered Jan 13 23:41:36.005780 kernel: acpiphp: Slot [10] registered Jan 13 23:41:36.005803 kernel: acpiphp: Slot [11] registered Jan 13 23:41:36.005823 kernel: acpiphp: Slot [12] registered Jan 13 23:41:36.005841 kernel: acpiphp: Slot [13] registered Jan 13 23:41:36.005860 kernel: acpiphp: Slot [14] registered Jan 13 23:41:36.005880 kernel: acpiphp: Slot [15] registered Jan 13 23:41:36.005899 kernel: acpiphp: Slot [16] registered Jan 13 23:41:36.005918 kernel: acpiphp: Slot [17] registered Jan 13 23:41:36.005940 kernel: acpiphp: Slot [18] registered Jan 13 23:41:36.005959 kernel: acpiphp: Slot [19] registered Jan 13 23:41:36.005978 kernel: acpiphp: Slot [20] registered Jan 13 23:41:36.005998 kernel: acpiphp: Slot [21] registered Jan 13 23:41:36.006040 
kernel: acpiphp: Slot [22] registered Jan 13 23:41:36.006063 kernel: acpiphp: Slot [23] registered Jan 13 23:41:36.006083 kernel: acpiphp: Slot [24] registered Jan 13 23:41:36.006108 kernel: acpiphp: Slot [25] registered Jan 13 23:41:36.006127 kernel: acpiphp: Slot [26] registered Jan 13 23:41:36.006146 kernel: acpiphp: Slot [27] registered Jan 13 23:41:36.006165 kernel: acpiphp: Slot [28] registered Jan 13 23:41:36.006184 kernel: acpiphp: Slot [29] registered Jan 13 23:41:36.006203 kernel: acpiphp: Slot [30] registered Jan 13 23:41:36.006222 kernel: acpiphp: Slot [31] registered Jan 13 23:41:36.006241 kernel: PCI host bridge to bus 0000:00 Jan 13 23:41:36.006518 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jan 13 23:41:36.006758 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 13 23:41:36.006991 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jan 13 23:41:36.009364 kernel: pci_bus 0000:00: root bus resource [bus 00] Jan 13 23:41:36.009679 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Jan 13 23:41:36.009971 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Jan 13 23:41:36.011382 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Jan 13 23:41:36.011689 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Jan 13 23:41:36.011950 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Jan 13 23:41:36.013442 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 23:41:36.013745 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Jan 13 23:41:36.014005 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Jan 13 23:41:36.014293 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Jan 13 23:41:36.014549 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Jan 13 23:41:36.014804 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 23:41:36.015601 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jan 13 23:41:36.015899 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 13 23:41:36.018073 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jan 13 23:41:36.018118 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 13 23:41:36.018139 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 13 23:41:36.018159 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 13 23:41:36.018179 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 13 23:41:36.018199 kernel: iommu: Default domain type: Translated Jan 13 23:41:36.018233 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 13 23:41:36.018257 kernel: efivars: Registered efivars operations Jan 13 23:41:36.018278 kernel: vgaarb: loaded Jan 13 23:41:36.018298 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 13 23:41:36.018318 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 23:41:36.018337 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 23:41:36.018356 kernel: pnp: PnP ACPI init Jan 13 23:41:36.018675 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jan 13 23:41:36.018710 kernel: pnp: PnP ACPI: found 1 devices Jan 13 23:41:36.018730 kernel: NET: Registered PF_INET protocol family Jan 13 23:41:36.018751 kernel: IP idents hash table 
entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 23:41:36.018771 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 23:41:36.018792 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 23:41:36.018812 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 23:41:36.018838 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 13 23:41:36.018858 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 23:41:36.018878 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 23:41:36.018899 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 23:41:36.018920 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 23:41:36.018940 kernel: PCI: CLS 0 bytes, default 64 Jan 13 23:41:36.018959 kernel: kvm [1]: HYP mode not available Jan 13 23:41:36.018982 kernel: Initialise system trusted keyrings Jan 13 23:41:36.019002 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 23:41:36.019125 kernel: Key type asymmetric registered Jan 13 23:41:36.019152 kernel: Asymmetric key parser 'x509' registered Jan 13 23:41:36.019173 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 13 23:41:36.019193 kernel: io scheduler mq-deadline registered Jan 13 23:41:36.019212 kernel: io scheduler kyber registered Jan 13 23:41:36.019239 kernel: io scheduler bfq registered Jan 13 23:41:36.019583 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jan 13 23:41:36.019625 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 13 23:41:36.019646 kernel: ACPI: button: Power Button [PWRB] Jan 13 23:41:36.019667 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jan 13 23:41:36.019688 kernel: ACPI: button: Sleep Button [SLPB] Jan 13 23:41:36.019718 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 23:41:36.019741 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 13 23:41:36.020202 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jan 13 23:41:36.020247 kernel: printk: legacy console [ttyS0] disabled Jan 13 23:41:36.020269 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jan 13 23:41:36.020289 kernel: printk: legacy console [ttyS0] enabled Jan 13 23:41:36.020309 kernel: printk: legacy bootconsole [uart0] disabled Jan 13 23:41:36.020342 kernel: thunder_xcv, ver 1.0 Jan 13 23:41:36.020363 kernel: thunder_bgx, ver 1.0 Jan 13 23:41:36.020382 kernel: nicpf, ver 1.0 Jan 13 23:41:36.020402 kernel: nicvf, ver 1.0 Jan 13 23:41:36.020766 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 13 23:41:36.021903 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-13T23:41:32 UTC (1768347692) Jan 13 23:41:36.021956 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 23:41:36.021992 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available Jan 13 23:41:36.022043 kernel: NET: Registered PF_INET6 protocol family Jan 13 23:41:36.022071 kernel: watchdog: NMI not fully supported Jan 13 23:41:36.022092 kernel: watchdog: Hard watchdog permanently disabled Jan 13 23:41:36.022112 kernel: Segment Routing with IPv6 Jan 13 23:41:36.022133 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 23:41:36.022153 kernel: NET: Registered PF_PACKET protocol family Jan 13 23:41:36.022183 kernel: Key type 
dns_resolver registered Jan 13 23:41:36.022203 kernel: registered taskstats version 1 Jan 13 23:41:36.022224 kernel: Loading compiled-in X.509 certificates Jan 13 23:41:36.022244 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: 61f104a5e4017e43c6bf0c9744e6a522053d7383' Jan 13 23:41:36.022263 kernel: Demotion targets for Node 0: null Jan 13 23:41:36.022283 kernel: Key type .fscrypt registered Jan 13 23:41:36.022302 kernel: Key type fscrypt-provisioning registered Jan 13 23:41:36.022328 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 13 23:41:36.022349 kernel: ima: Allocated hash algorithm: sha1 Jan 13 23:41:36.022368 kernel: ima: No architecture policies found Jan 13 23:41:36.022388 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 13 23:41:36.022408 kernel: clk: Disabling unused clocks Jan 13 23:41:36.022427 kernel: PM: genpd: Disabling unused power domains Jan 13 23:41:36.022447 kernel: Freeing unused kernel memory: 12480K Jan 13 23:41:36.022466 kernel: Run /init as init process Jan 13 23:41:36.022493 kernel: with arguments: Jan 13 23:41:36.022513 kernel: /init Jan 13 23:41:36.022532 kernel: with environment: Jan 13 23:41:36.022551 kernel: HOME=/ Jan 13 23:41:36.022571 kernel: TERM=linux Jan 13 23:41:36.022593 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 13 23:41:36.022904 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 13 23:41:36.023209 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 13 23:41:36.023246 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 23:41:36.023267 kernel: GPT:25804799 != 33554431 Jan 13 23:41:36.023286 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 23:41:36.023306 kernel: GPT:25804799 != 33554431 Jan 13 23:41:36.023326 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 23:41:36.023357 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 23:41:36.023377 kernel: SCSI subsystem initialized Jan 13 23:41:36.023397 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 23:41:36.023417 kernel: device-mapper: uevent: version 1.0.3 Jan 13 23:41:36.023437 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 13 23:41:36.023457 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 13 23:41:36.023477 kernel: raid6: neonx8 gen() 6481 MB/s Jan 13 23:41:36.023503 kernel: raid6: neonx4 gen() 6492 MB/s Jan 13 23:41:36.023523 kernel: raid6: neonx2 gen() 5427 MB/s Jan 13 23:41:36.023542 kernel: raid6: neonx1 gen() 3915 MB/s Jan 13 23:41:36.023562 kernel: raid6: int64x8 gen() 3599 MB/s Jan 13 23:41:36.023581 kernel: raid6: int64x4 gen() 3657 MB/s Jan 13 23:41:36.023600 kernel: raid6: int64x2 gen() 3546 MB/s Jan 13 23:41:36.023620 kernel: raid6: int64x1 gen() 2715 MB/s Jan 13 23:41:36.023643 kernel: raid6: using algorithm neonx4 gen() 6492 MB/s Jan 13 23:41:36.023663 kernel: raid6: .... 
xor() 4896 MB/s, rmw enabled Jan 13 23:41:36.023682 kernel: raid6: using neon recovery algorithm Jan 13 23:41:36.023702 kernel: xor: measuring software checksum speed Jan 13 23:41:36.023722 kernel: 8regs : 12938 MB/sec Jan 13 23:41:36.023742 kernel: 32regs : 12442 MB/sec Jan 13 23:41:36.023761 kernel: arm64_neon : 9196 MB/sec Jan 13 23:41:36.023785 kernel: xor: using function: 8regs (12938 MB/sec) Jan 13 23:41:36.023805 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 23:41:36.023825 kernel: BTRFS: device fsid 96ce121f-260d-446f-a0e2-a59fdf56d58c devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (222) Jan 13 23:41:36.023845 kernel: BTRFS info (device dm-0): first mount of filesystem 96ce121f-260d-446f-a0e2-a59fdf56d58c Jan 13 23:41:36.023865 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 13 23:41:36.023885 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 23:41:36.023904 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 23:41:36.023928 kernel: BTRFS info (device dm-0): enabling free space tree Jan 13 23:41:36.023947 kernel: loop: module loaded Jan 13 23:41:36.023967 kernel: loop0: detected capacity change from 0 to 91840 Jan 13 23:41:36.023987 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 23:41:36.024009 systemd[1]: Successfully made /usr/ read-only. Jan 13 23:41:36.024096 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 13 23:41:36.024127 systemd[1]: Detected virtualization amazon. Jan 13 23:41:36.024148 systemd[1]: Detected architecture arm64. Jan 13 23:41:36.024168 systemd[1]: Running in initrd. Jan 13 23:41:36.024188 systemd[1]: No hostname configured, using default hostname. Jan 13 23:41:36.024210 systemd[1]: Hostname set to . Jan 13 23:41:36.024231 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 13 23:41:36.024251 systemd[1]: Queued start job for default target initrd.target. Jan 13 23:41:36.024277 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 13 23:41:36.024298 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 23:41:36.024318 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 23:41:36.024341 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 23:41:36.024363 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 23:41:36.024405 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 23:41:36.024428 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 23:41:36.024450 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 23:41:36.024472 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 23:41:36.024494 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 13 23:41:36.024520 systemd[1]: Reached target paths.target - Path Units. 
Jan 13 23:41:36.024542 systemd[1]: Reached target slices.target - Slice Units. Jan 13 23:41:36.024563 systemd[1]: Reached target swap.target - Swaps. Jan 13 23:41:36.024585 systemd[1]: Reached target timers.target - Timer Units. Jan 13 23:41:36.024607 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 23:41:36.024629 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 23:41:36.024651 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 13 23:41:36.024680 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 23:41:36.024702 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 13 23:41:36.024724 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 23:41:36.024746 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 23:41:36.024768 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 23:41:36.024790 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 23:41:36.024812 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 23:41:36.024840 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 23:41:36.024863 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 23:41:36.024885 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 23:41:36.024908 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 13 23:41:36.024930 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 23:41:36.024951 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 23:41:36.024973 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 23:41:36.025003 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 23:41:36.025061 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 23:41:36.025096 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 23:41:36.025118 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 23:41:36.025141 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 23:41:36.025239 systemd-journald[360]: Collecting audit messages is enabled. Jan 13 23:41:36.025296 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 23:41:36.025319 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 23:41:36.025343 kernel: audit: type=1130 audit(1768347695.993:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:36.025367 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 23:41:36.025395 kernel: Bridge firewalling registered Jan 13 23:41:36.025417 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 13 23:41:36.025441 systemd-journald[360]: Journal started Jan 13 23:41:36.025480 systemd-journald[360]: Runtime Journal (/run/log/journal/ec2d2eddd77080d261d9ff9bfa3fa364) is 8M, max 75.3M, 67.3M free. Jan 13 23:41:36.032865 kernel: audit: type=1130 audit(1768347696.020:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:35.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:36.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:36.008127 systemd-modules-load[361]: Inserted module 'br_netfilter' Jan 13 23:41:36.049645 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 23:41:36.049726 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 23:41:36.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:36.063556 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 23:41:36.076124 kernel: audit: type=1130 audit(1768347696.051:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:36.082445 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:41:36.093764 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 23:41:36.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:36.103803 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 23:41:36.117949 kernel: audit: type=1130 audit(1768347696.092:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:36.119138 kernel: audit: type=1130 audit(1768347696.095:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:36.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:36.122282 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 23:41:36.131946 systemd-tmpfiles[379]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 13 23:41:36.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:41:36.142692 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 23:41:36.140000 audit: BPF prog-id=6 op=LOAD Jan 13 23:41:36.148787 kernel: audit: type=1130 audit(1768347696.131:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:36.148859 kernel: audit: type=1334 audit(1768347696.140:8): prog-id=6 op=LOAD Jan 13 23:41:36.157294 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 23:41:36.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:36.173100 kernel: audit: type=1130 audit(1768347696.164:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:36.176267 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 23:41:36.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:36.183956 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 23:41:36.192055 kernel: audit: type=1130 audit(1768347696.180:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:36.230367 dracut-cmdline[400]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a2e92265a189403c21ae2a2ae9e6d4fed0782e0e430fbcb369a7bb0db156274f Jan 13 23:41:36.429741 systemd-resolved[388]: Positive Trust Anchors: Jan 13 23:41:36.429773 systemd-resolved[388]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 23:41:36.429781 systemd-resolved[388]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 13 23:41:36.429841 systemd-resolved[388]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 23:41:36.654066 kernel: Loading iSCSI transport class v2.0-870. 
Jan 13 23:41:36.702047 kernel: iscsi: registered transport (tcp) Jan 13 23:41:36.721136 kernel: random: crng init done Jan 13 23:41:36.725555 kernel: iscsi: registered transport (qla4xxx) Jan 13 23:41:36.725646 kernel: QLogic iSCSI HBA Driver Jan 13 23:41:36.725999 systemd-resolved[388]: Defaulting to hostname 'linux'. Jan 13 23:41:36.733218 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 23:41:36.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:36.738369 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 23:41:36.753251 kernel: audit: type=1130 audit(1768347696.732:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:36.789574 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 23:41:36.828175 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 23:41:36.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:36.837880 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 23:41:36.923796 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 23:41:36.929269 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 23:41:36.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:36.939348 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 23:41:36.996526 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 23:41:37.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:37.005000 audit: BPF prog-id=7 op=LOAD Jan 13 23:41:37.005000 audit: BPF prog-id=8 op=LOAD Jan 13 23:41:37.008357 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 23:41:37.076983 systemd-udevd[639]: Using default interface naming scheme 'v257'. Jan 13 23:41:37.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:37.102866 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 23:41:37.107771 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 23:41:37.168051 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 23:41:37.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:41:37.173000 audit: BPF prog-id=9 op=LOAD Jan 13 23:41:37.178188 dracut-pre-trigger[709]: rd.md=0: removing MD RAID activation Jan 13 23:41:37.181401 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 23:41:37.245132 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 23:41:37.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:37.256971 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 23:41:37.292377 systemd-networkd[751]: lo: Link UP Jan 13 23:41:37.292395 systemd-networkd[751]: lo: Gained carrier Jan 13 23:41:37.296129 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 23:41:37.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:37.303722 systemd[1]: Reached target network.target - Network. Jan 13 23:41:37.416531 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 23:41:37.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:37.427281 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 23:41:37.654946 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 23:41:37.655309 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:41:37.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:37.667087 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 23:41:37.674817 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 23:41:37.709968 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 13 23:41:37.710074 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 13 23:41:37.721414 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 13 23:41:37.721844 kernel: nvme nvme0: using unchecked data buffer Jan 13 23:41:37.722110 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 13 23:41:37.740056 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:70:e1:f1:5d:43 Jan 13 23:41:37.743816 (udev-worker)[783]: Network interface NamePolicy= disabled on kernel command line. Jan 13 23:41:37.749393 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:41:37.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:37.767790 systemd-networkd[751]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 13 23:41:37.767806 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 13 23:41:37.778517 systemd-networkd[751]: eth0: Link UP Jan 13 23:41:37.778816 systemd-networkd[751]: eth0: Gained carrier Jan 13 23:41:37.778838 systemd-networkd[751]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 13 23:41:37.801199 systemd-networkd[751]: eth0: DHCPv4 address 172.31.22.81/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 23:41:37.882897 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 13 23:41:37.914322 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 23:41:37.944368 disk-uuid[872]: Primary Header is updated. Jan 13 23:41:37.944368 disk-uuid[872]: Secondary Entries is updated. Jan 13 23:41:37.944368 disk-uuid[872]: Secondary Header is updated. Jan 13 23:41:37.985802 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 23:41:38.028248 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 13 23:41:38.093716 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 13 23:41:38.430824 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 23:41:38.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:38.447142 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 23:41:38.450490 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 23:41:38.458091 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 23:41:38.463462 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 23:41:38.519168 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 23:41:38.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:39.086396 disk-uuid[877]: Warning: The kernel is still using the old partition table. Jan 13 23:41:39.086396 disk-uuid[877]: The new table will be used at the next reboot or after you Jan 13 23:41:39.086396 disk-uuid[877]: run partprobe(8) or kpartx(8) Jan 13 23:41:39.086396 disk-uuid[877]: The operation has completed successfully. Jan 13 23:41:39.107512 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 23:41:39.107990 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 23:41:39.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:39.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:39.114915 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 13 23:41:39.182068 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1094) Jan 13 23:41:39.186519 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 43f26778-0bac-4551-a250-d0042cfe708e Jan 13 23:41:39.186571 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 23:41:39.232858 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 23:41:39.232956 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 13 23:41:39.243132 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 43f26778-0bac-4551-a250-d0042cfe708e Jan 13 23:41:39.246196 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 23:41:39.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:39.251733 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 23:41:39.417336 systemd-networkd[751]: eth0: Gained IPv6LL Jan 13 23:41:40.513504 ignition[1113]: Ignition 2.24.0 Jan 13 23:41:40.513539 ignition[1113]: Stage: fetch-offline Jan 13 23:41:40.513967 ignition[1113]: no configs at "/usr/lib/ignition/base.d" Jan 13 23:41:40.513998 ignition[1113]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 23:41:40.518771 ignition[1113]: Ignition finished successfully Jan 13 23:41:40.528105 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 23:41:40.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:40.533632 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 13 23:41:40.584207 ignition[1120]: Ignition 2.24.0 Jan 13 23:41:40.584719 ignition[1120]: Stage: fetch Jan 13 23:41:40.585118 ignition[1120]: no configs at "/usr/lib/ignition/base.d" Jan 13 23:41:40.585141 ignition[1120]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 23:41:40.585288 ignition[1120]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 23:41:40.607896 ignition[1120]: PUT result: OK Jan 13 23:41:40.611716 ignition[1120]: parsed url from cmdline: "" Jan 13 23:41:40.611873 ignition[1120]: no config URL provided Jan 13 23:41:40.611895 ignition[1120]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 23:41:40.611932 ignition[1120]: no config at "/usr/lib/ignition/user.ign" Jan 13 23:41:40.611968 ignition[1120]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 23:41:40.616871 ignition[1120]: PUT result: OK Jan 13 23:41:40.616978 ignition[1120]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 13 23:41:40.622284 ignition[1120]: GET result: OK Jan 13 23:41:40.622478 ignition[1120]: parsing config with SHA512: bb712b293d7355e40a1d8b37a602872450d5eebd500a0faa21f8780bec12855bf3cfa2fe2c615cb26d797c687b86a54c0635c1c0bf20ed957226b042e8b8a341 Jan 13 23:41:40.638387 unknown[1120]: fetched base config from "system" Jan 13 23:41:40.638415 unknown[1120]: fetched base config from "system" Jan 13 23:41:40.638430 unknown[1120]: fetched user config from "aws" Jan 13 23:41:40.640767 ignition[1120]: fetch: fetch complete Jan 13 23:41:40.640820 ignition[1120]: fetch: fetch passed Jan 13 23:41:40.642393 ignition[1120]: Ignition finished successfully Jan 13 23:41:40.650921 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 23:41:40.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:40.658938 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 23:41:40.706860 ignition[1126]: Ignition 2.24.0 Jan 13 23:41:40.707427 ignition[1126]: Stage: kargs Jan 13 23:41:40.707831 ignition[1126]: no configs at "/usr/lib/ignition/base.d" Jan 13 23:41:40.707854 ignition[1126]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 23:41:40.707993 ignition[1126]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 23:41:40.712642 ignition[1126]: PUT result: OK Jan 13 23:41:40.727340 ignition[1126]: kargs: kargs passed Jan 13 23:41:40.727504 ignition[1126]: Ignition finished successfully Jan 13 23:41:40.733712 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 23:41:40.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:40.740168 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 13 23:41:40.781687 ignition[1132]: Ignition 2.24.0 Jan 13 23:41:40.782249 ignition[1132]: Stage: disks Jan 13 23:41:40.782658 ignition[1132]: no configs at "/usr/lib/ignition/base.d" Jan 13 23:41:40.782681 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 23:41:40.782819 ignition[1132]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 23:41:40.795217 ignition[1132]: PUT result: OK Jan 13 23:41:40.802996 ignition[1132]: disks: disks passed Jan 13 23:41:40.803205 ignition[1132]: Ignition finished successfully Jan 13 23:41:40.806546 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 23:41:40.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:40.812819 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 23:41:40.816984 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 23:41:40.825346 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 23:41:40.829989 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 23:41:40.835153 systemd[1]: Reached target basic.target - Basic System. Jan 13 23:41:40.841443 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 23:41:40.979502 systemd-fsck[1140]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 13 23:41:40.989923 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 23:41:41.002444 kernel: kauditd_printk_skb: 22 callbacks suppressed Jan 13 23:41:41.002485 kernel: audit: type=1130 audit(1768347700.995:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:40.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:41.004247 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 23:41:41.247059 kernel: EXT4-fs (nvme0n1p9): mounted filesystem b1eb7e1a-01a1-41b0-9b3c-5a37b4853d4d r/w with ordered data mode. Quota mode: none. Jan 13 23:41:41.248321 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 23:41:41.252809 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 23:41:41.305166 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 23:41:41.310068 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 23:41:41.315737 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 23:41:41.318157 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 23:41:41.318219 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 23:41:41.345157 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 23:41:41.351780 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 13 23:41:41.368069 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1159) Jan 13 23:41:41.374860 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 43f26778-0bac-4551-a250-d0042cfe708e Jan 13 23:41:41.374941 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 23:41:41.384066 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 23:41:41.384171 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 13 23:41:41.386545 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 23:41:43.869878 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 23:41:43.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:43.891061 kernel: audit: type=1130 audit(1768347703.871:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:43.882595 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 23:41:43.886611 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 23:41:43.926000 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 23:41:43.932097 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 43f26778-0bac-4551-a250-d0042cfe708e Jan 13 23:41:43.964563 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 23:41:43.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:43.978168 kernel: audit: type=1130 audit(1768347703.970:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:43.994282 ignition[1258]: INFO : Ignition 2.24.0 Jan 13 23:41:43.994282 ignition[1258]: INFO : Stage: mount Jan 13 23:41:43.998542 ignition[1258]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 23:41:43.998542 ignition[1258]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 23:41:43.998542 ignition[1258]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 23:41:44.019851 ignition[1258]: INFO : PUT result: OK Jan 13 23:41:44.031364 ignition[1258]: INFO : mount: mount passed Jan 13 23:41:44.037191 ignition[1258]: INFO : Ignition finished successfully Jan 13 23:41:44.042808 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 23:41:44.049808 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 23:41:44.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:44.059148 kernel: audit: type=1130 audit(1768347704.045:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:44.100768 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
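The sysroot-oem.mount unit above mounts the btrfs partition labelled OEM (here /dev/nvme0n1p6), which is located through its udev-maintained /dev/disk/by-label symlink. A small illustrative Python sketch, assuming only generic Linux interfaces (nothing Flatcar-specific), that resolves such a label and reports where the device is currently mounted:

    # Hedged illustration: resolve a device by filesystem label and list its
    # mount points, mirroring what the sysroot-oem.mount unit above relies on.
    import os

    def device_for_label(label):
        # /dev/disk/by-label/<LABEL> is a symlink maintained by udev.
        return os.path.realpath(f"/dev/disk/by-label/{label}")

    def mount_points(device):
        points = []
        with open("/proc/self/mounts") as mounts:
            for line in mounts:
                source, target = line.split()[:2]
                if source == device:
                    points.append(target)
        return points

    if __name__ == "__main__":
        dev = device_for_label("OEM")   # e.g. /dev/nvme0n1p6 in the log above
        print(dev, "->", mount_points(dev) or "not mounted")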
Jan 13 23:41:44.145097 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1267) Jan 13 23:41:44.147091 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 43f26778-0bac-4551-a250-d0042cfe708e Jan 13 23:41:44.149623 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 23:41:44.157703 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 23:41:44.157837 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 13 23:41:44.161718 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 23:41:44.214653 ignition[1284]: INFO : Ignition 2.24.0 Jan 13 23:41:44.214653 ignition[1284]: INFO : Stage: files Jan 13 23:41:44.219160 ignition[1284]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 23:41:44.219160 ignition[1284]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 23:41:44.219160 ignition[1284]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 23:41:44.230189 ignition[1284]: INFO : PUT result: OK Jan 13 23:41:44.236561 ignition[1284]: DEBUG : files: compiled without relabeling support, skipping Jan 13 23:41:44.240779 ignition[1284]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 23:41:44.240779 ignition[1284]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 23:41:44.315948 ignition[1284]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 23:41:44.319860 ignition[1284]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 23:41:44.324121 unknown[1284]: wrote ssh authorized keys file for user: core Jan 13 23:41:44.326822 ignition[1284]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 23:41:44.333529 ignition[1284]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 13 23:41:44.338144 ignition[1284]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 13 23:41:44.442388 ignition[1284]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 23:41:44.593974 ignition[1284]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 13 23:41:44.593974 ignition[1284]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 13 23:41:44.602785 ignition[1284]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 23:41:44.602785 ignition[1284]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 23:41:44.602785 ignition[1284]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 23:41:44.602785 ignition[1284]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 23:41:44.602785 ignition[1284]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 23:41:44.602785 ignition[1284]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 
23:41:44.602785 ignition[1284]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 23:41:44.630726 ignition[1284]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 23:41:44.630726 ignition[1284]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 23:41:44.630726 ignition[1284]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 13 23:41:44.630726 ignition[1284]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 13 23:41:44.630726 ignition[1284]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 13 23:41:44.630726 ignition[1284]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 13 23:41:44.942454 ignition[1284]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 13 23:41:45.347341 ignition[1284]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 13 23:41:45.347341 ignition[1284]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 13 23:41:45.409735 ignition[1284]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 23:41:45.432717 ignition[1284]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 23:41:45.436956 ignition[1284]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 13 23:41:45.436956 ignition[1284]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 13 23:41:45.436956 ignition[1284]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 23:41:45.436956 ignition[1284]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 23:41:45.436956 ignition[1284]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 23:41:45.436956 ignition[1284]: INFO : files: files passed Jan 13 23:41:45.436956 ignition[1284]: INFO : Ignition finished successfully Jan 13 23:41:45.462169 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 23:41:45.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:45.466462 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 23:41:45.475854 kernel: audit: type=1130 audit(1768347705.463:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:41:45.491420 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 23:41:45.497538 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 23:41:45.497753 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 23:41:45.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:45.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:45.519316 kernel: audit: type=1130 audit(1768347705.509:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:45.519394 kernel: audit: type=1131 audit(1768347705.509:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:45.541125 initrd-setup-root-after-ignition[1315]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 23:41:45.544905 initrd-setup-root-after-ignition[1315]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 23:41:45.549101 initrd-setup-root-after-ignition[1319]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 23:41:45.557131 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 23:41:45.569649 kernel: audit: type=1130 audit(1768347705.556:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:45.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:45.562845 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 23:41:45.571920 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 23:41:45.662845 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 23:41:45.665441 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 23:41:45.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:45.671397 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 23:41:45.683611 kernel: audit: type=1130 audit(1768347705.669:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:45.683671 kernel: audit: type=1131 audit(1768347705.669:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:41:45.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:45.683657 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 23:41:45.686595 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 23:41:45.688109 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 23:41:45.737130 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 23:41:45.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:45.744500 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 23:41:45.782597 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 13 23:41:45.783208 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 23:41:45.786151 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 23:41:45.790011 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 23:41:45.797629 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 23:41:45.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:45.797984 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 23:41:45.806080 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 23:41:45.816579 systemd[1]: Stopped target basic.target - Basic System. Jan 13 23:41:45.819103 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 23:41:45.823294 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 23:41:45.828195 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 23:41:45.832894 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 13 23:41:45.841078 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 23:41:45.848320 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 23:41:45.852011 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 23:41:45.857084 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 23:41:45.863933 systemd[1]: Stopped target swap.target - Swaps. Jan 13 23:41:45.866741 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 23:41:45.867071 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 23:41:45.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:45.876676 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 23:41:45.879959 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 23:41:45.888262 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 13 23:41:45.892180 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 23:41:45.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:45.895274 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 23:41:45.895520 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 23:41:45.901882 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 23:41:45.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:45.902632 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 23:41:45.909943 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 23:41:45.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:45.910220 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 23:41:45.923612 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 23:41:45.926467 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 23:41:45.926739 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 23:41:45.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:45.941204 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 23:41:45.945483 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 23:41:45.946791 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 23:41:45.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:45.959715 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 23:41:45.960526 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 23:41:45.972541 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 23:41:45.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:45.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:45.972785 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 23:41:45.993592 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 23:41:45.993802 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 13 23:41:46.011918 kernel: kauditd_printk_skb: 10 callbacks suppressed Jan 13 23:41:46.011998 kernel: audit: type=1130 audit(1768347706.005:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.013351 ignition[1339]: INFO : Ignition 2.24.0 Jan 13 23:41:46.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.023046 kernel: audit: type=1131 audit(1768347706.005:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.023132 ignition[1339]: INFO : Stage: umount Jan 13 23:41:46.023132 ignition[1339]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 23:41:46.023132 ignition[1339]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 23:41:46.023132 ignition[1339]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 23:41:46.038110 ignition[1339]: INFO : PUT result: OK Jan 13 23:41:46.046138 ignition[1339]: INFO : umount: umount passed Jan 13 23:41:46.046138 ignition[1339]: INFO : Ignition finished successfully Jan 13 23:41:46.054848 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 23:41:46.070949 kernel: audit: type=1131 audit(1768347706.056:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.070998 kernel: audit: type=1131 audit(1768347706.064:57): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.055135 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 23:41:46.088195 kernel: audit: type=1131 audit(1768347706.073:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.088247 kernel: audit: type=1131 audit(1768347706.080:59): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:41:46.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.058435 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 23:41:46.058630 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 23:41:46.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.104818 kernel: audit: type=1131 audit(1768347706.095:60): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.065745 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 23:41:46.065869 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 23:41:46.074636 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 23:41:46.074777 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 23:41:46.081505 systemd[1]: Stopped target network.target - Network. Jan 13 23:41:46.090120 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 23:41:46.090282 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 23:41:46.096251 systemd[1]: Stopped target paths.target - Path Units. Jan 13 23:41:46.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.105190 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 23:41:46.158160 kernel: audit: type=1131 audit(1768347706.139:61): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.158204 kernel: audit: type=1131 audit(1768347706.146:62): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.109577 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 23:41:46.109738 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 23:41:46.114520 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 23:41:46.121478 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 23:41:46.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.121578 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 23:41:46.196794 kernel: audit: type=1131 audit(1768347706.184:63): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:41:46.127557 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 23:41:46.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.127642 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 23:41:46.133686 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 13 23:41:46.206000 audit: BPF prog-id=6 op=UNLOAD Jan 13 23:41:46.133758 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 13 23:41:46.215000 audit: BPF prog-id=9 op=UNLOAD Jan 13 23:41:46.138368 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 23:41:46.138493 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 23:41:46.146796 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 23:41:46.146949 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 23:41:46.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.147881 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 23:41:46.151636 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 23:41:46.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.171791 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 23:41:46.172979 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 23:41:46.173217 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 23:41:46.194032 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 23:41:46.194243 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 23:41:46.210208 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 13 23:41:46.216656 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 23:41:46.216748 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 23:41:46.235194 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 23:41:46.242215 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 23:41:46.242353 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 23:41:46.247720 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 23:41:46.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.247836 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 23:41:46.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:41:46.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.253333 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 23:41:46.253449 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 23:41:46.258378 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 23:41:46.282235 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 23:41:46.293207 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 23:41:46.302996 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 23:41:46.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.304500 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 23:41:46.310191 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 23:41:46.310507 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 23:41:46.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.319305 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 23:41:46.319478 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 23:41:46.329262 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 23:41:46.329349 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 23:41:46.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.338815 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 23:41:46.338963 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 23:41:46.344983 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 23:41:46.345121 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 23:41:46.358043 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 23:41:46.358174 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 23:41:46.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.375805 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 23:41:46.383312 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 13 23:41:46.383436 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 23:41:46.386792 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 23:41:46.386892 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 13 23:41:46.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.408575 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 23:41:46.408828 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:41:46.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.435339 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 23:41:46.438215 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 23:41:46.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.449632 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 23:41:46.452252 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 23:41:46.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.457738 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 23:41:46.467376 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 23:41:46.501184 systemd[1]: Switching root. Jan 13 23:41:46.568160 systemd-journald[360]: Journal stopped Jan 13 23:41:50.425797 systemd-journald[360]: Received SIGTERM from PID 1 (systemd). Jan 13 23:41:50.425935 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 23:41:50.425989 kernel: SELinux: policy capability open_perms=1 Jan 13 23:41:50.428071 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 23:41:50.428121 kernel: SELinux: policy capability always_check_network=0 Jan 13 23:41:50.428155 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 23:41:50.428187 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 23:41:50.428217 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 23:41:50.428251 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 23:41:50.428294 kernel: SELinux: policy capability userspace_initial_context=0 Jan 13 23:41:50.428333 systemd[1]: Successfully loaded SELinux policy in 149.353ms. Jan 13 23:41:50.428385 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.363ms. Jan 13 23:41:50.428422 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 13 23:41:50.428457 systemd[1]: Detected virtualization amazon. Jan 13 23:41:50.428489 systemd[1]: Detected architecture arm64. Jan 13 23:41:50.428519 systemd[1]: Detected first boot. 
Jan 13 23:41:50.428553 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 13 23:41:50.428589 zram_generator::config[1383]: No configuration found. Jan 13 23:41:50.428634 kernel: NET: Registered PF_VSOCK protocol family Jan 13 23:41:50.428667 systemd[1]: Populated /etc with preset unit settings. Jan 13 23:41:50.428699 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 23:41:50.428732 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 23:41:50.428765 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 23:41:50.428798 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 23:41:50.428835 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 23:41:50.428867 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 23:41:50.428897 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 23:41:50.428931 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 23:41:50.428963 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 23:41:50.428996 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 23:41:50.433121 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 23:41:50.433179 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 23:41:50.433218 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 23:41:50.433249 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 23:41:50.433284 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 23:41:50.433317 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 23:41:50.433362 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 23:41:50.433392 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 23:41:50.433428 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 23:41:50.433460 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 23:41:50.433490 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 23:41:50.433530 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 23:41:50.433561 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 23:41:50.433592 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 23:41:50.433625 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 23:41:50.433658 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 23:41:50.433688 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 13 23:41:50.433719 systemd[1]: Reached target slices.target - Slice Units. Jan 13 23:41:50.433751 systemd[1]: Reached target swap.target - Swaps. Jan 13 23:41:50.433784 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 23:41:50.433815 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
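The dev-disk-by\x2dlabel-OEM.device unit expected above is the systemd-escaped form of the path /dev/disk/by-label/OEM: slashes become "-" and literal "-" characters are hex-escaped. A rough Python approximation of that escaping rule (comparable in spirit to systemd-escape --path --suffix=device; a sketch of the convention, not systemd's canonical implementation):

    # Approximate systemd path escaping: "/" -> "-", and characters outside
    # [A-Za-z0-9:_.] are emitted as \xNN, which is why "-" shows up as \x2d.
    def systemd_escape_path(path, suffix="device"):
        allowed = set(
            "abcdefghijklmnopqrstuvwxyz"
            "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "0123456789:_."
        )
        trimmed = path.strip("/") or "-"
        out = []
        for i, ch in enumerate(trimmed):
            if ch == "/":
                out.append("-")
            elif ch in allowed and not (ch == "." and i == 0):
                out.append(ch)
            else:
                out.append("\\x{:02x}".format(ord(ch)))
        return "".join(out) + "." + suffix

    print(systemd_escape_path("/dev/disk/by-label/OEM"))
    # -> dev-disk-by\x2dlabel-OEM.device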
Jan 13 23:41:50.433849 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 13 23:41:50.433881 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 13 23:41:50.433911 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 13 23:41:50.433941 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 23:41:50.433973 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 13 23:41:50.434003 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 13 23:41:50.434064 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 23:41:50.434105 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 23:41:50.434141 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 23:41:50.434174 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 23:41:50.434204 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 23:41:50.434235 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 23:41:50.434266 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 23:41:50.434298 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 23:41:50.434334 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 23:41:50.434367 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 23:41:50.434397 systemd[1]: Reached target machines.target - Containers. Jan 13 23:41:50.434428 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 23:41:50.434464 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 23:41:50.434495 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 23:41:50.434528 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 23:41:50.434574 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 23:41:50.434605 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 23:41:50.434637 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 23:41:50.434669 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 23:41:50.434699 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 23:41:50.434731 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 23:41:50.434765 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 23:41:50.434801 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 23:41:50.434832 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 23:41:50.434863 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 23:41:50.434923 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jan 13 23:41:50.434962 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 23:41:50.434997 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 23:41:50.450357 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 23:41:50.450421 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 23:41:50.450456 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 13 23:41:50.450490 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 23:41:50.450531 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 23:41:50.450566 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 23:41:50.450597 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 23:41:50.450627 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 23:41:50.450658 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 23:41:50.450688 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 23:41:50.450719 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 23:41:50.450758 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 23:41:50.450790 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 23:41:50.450820 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 23:41:50.450856 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 23:41:50.450911 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 23:41:50.450956 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 23:41:50.450988 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 23:41:50.451041 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 23:41:50.451077 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 23:41:50.451112 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 23:41:50.451145 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 23:41:50.451177 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 23:41:50.451214 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 23:41:50.451248 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 23:41:50.451278 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 13 23:41:50.451308 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 13 23:41:50.451339 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 23:41:50.451370 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 23:41:50.451409 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 23:41:50.451445 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 13 23:41:50.451477 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 13 23:41:50.451509 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 13 23:41:50.451587 systemd-journald[1459]: Collecting audit messages is enabled. Jan 13 23:41:50.451642 kernel: fuse: init (API version 7.41) Jan 13 23:41:50.451673 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 23:41:50.451708 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 23:41:50.451738 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 23:41:50.451771 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 23:41:50.451807 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 23:41:50.451837 systemd-journald[1459]: Journal started Jan 13 23:41:50.451882 systemd-journald[1459]: Runtime Journal (/run/log/journal/ec2d2eddd77080d261d9ff9bfa3fa364) is 8M, max 75.3M, 67.3M free. Jan 13 23:41:49.859000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 13 23:41:50.462688 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 23:41:50.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:50.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:50.108000 audit: BPF prog-id=14 op=UNLOAD Jan 13 23:41:50.108000 audit: BPF prog-id=13 op=UNLOAD Jan 13 23:41:50.113000 audit: BPF prog-id=15 op=LOAD Jan 13 23:41:50.114000 audit: BPF prog-id=16 op=LOAD Jan 13 23:41:50.114000 audit: BPF prog-id=17 op=LOAD Jan 13 23:41:50.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:50.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:50.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:50.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:50.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:41:50.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:50.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:50.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:50.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:50.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:50.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:50.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:50.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:50.414000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 13 23:41:50.475008 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 23:41:50.414000 audit[1459]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffeac7bf10 a2=4000 a3=0 items=0 ppid=1 pid=1459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:50.414000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 13 23:41:50.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:50.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:50.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:49.677786 systemd[1]: Queued start job for default target multi-user.target. Jan 13 23:41:49.701221 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. 
Jan 13 23:41:49.702217 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 23:41:50.470236 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 23:41:50.476365 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 23:41:50.542088 kernel: ACPI: bus type drm_connector registered Jan 13 23:41:50.544635 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 23:41:50.547188 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 23:41:50.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:50.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:50.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:50.560113 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 23:41:50.567620 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 23:41:50.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:50.570876 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 23:41:50.578366 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 13 23:41:50.591247 systemd-journald[1459]: Time spent on flushing to /var/log/journal/ec2d2eddd77080d261d9ff9bfa3fa364 is 139.118ms for 1054 entries. Jan 13 23:41:50.591247 systemd-journald[1459]: System Journal (/var/log/journal/ec2d2eddd77080d261d9ff9bfa3fa364) is 8M, max 588.1M, 580.1M free. Jan 13 23:41:50.755273 systemd-journald[1459]: Received client request to flush runtime journal. Jan 13 23:41:50.755365 kernel: loop1: detected capacity change from 0 to 100192 Jan 13 23:41:50.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:50.712222 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 23:41:50.738192 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 23:41:50.749158 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 23:41:50.765173 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 23:41:50.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:50.803168 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 23:41:50.808152 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Jan 13 23:41:50.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:50.904263 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 23:41:50.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:50.911386 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 23:41:50.982645 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 23:41:50.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:50.993000 audit: BPF prog-id=18 op=LOAD Jan 13 23:41:50.994000 audit: BPF prog-id=19 op=LOAD Jan 13 23:41:50.994000 audit: BPF prog-id=20 op=LOAD Jan 13 23:41:50.998577 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 13 23:41:51.003000 audit: BPF prog-id=21 op=LOAD Jan 13 23:41:51.007378 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 23:41:51.013450 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 23:41:51.022090 kernel: loop2: detected capacity change from 0 to 61504 Jan 13 23:41:51.044262 kernel: kauditd_printk_skb: 74 callbacks suppressed Jan 13 23:41:51.044404 kernel: audit: type=1334 audit(1768347711.041:136): prog-id=22 op=LOAD Jan 13 23:41:51.041000 audit: BPF prog-id=22 op=LOAD Jan 13 23:41:51.046000 audit: BPF prog-id=23 op=LOAD Jan 13 23:41:51.051268 kernel: audit: type=1334 audit(1768347711.046:137): prog-id=23 op=LOAD Jan 13 23:41:51.051379 kernel: audit: type=1334 audit(1768347711.046:138): prog-id=24 op=LOAD Jan 13 23:41:51.046000 audit: BPF prog-id=24 op=LOAD Jan 13 23:41:51.051500 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 23:41:51.056000 audit: BPF prog-id=25 op=LOAD Jan 13 23:41:51.061228 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 13 23:41:51.071089 kernel: audit: type=1334 audit(1768347711.056:139): prog-id=25 op=LOAD Jan 13 23:41:51.071198 kernel: audit: type=1334 audit(1768347711.057:140): prog-id=26 op=LOAD Jan 13 23:41:51.071264 kernel: audit: type=1334 audit(1768347711.057:141): prog-id=27 op=LOAD Jan 13 23:41:51.057000 audit: BPF prog-id=26 op=LOAD Jan 13 23:41:51.057000 audit: BPF prog-id=27 op=LOAD Jan 13 23:41:51.121109 systemd-tmpfiles[1539]: ACLs are not supported, ignoring. Jan 13 23:41:51.121151 systemd-tmpfiles[1539]: ACLs are not supported, ignoring. Jan 13 23:41:51.144344 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 23:41:51.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:41:51.157072 kernel: audit: type=1130 audit(1768347711.146:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:51.235897 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 23:41:51.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:51.247059 kernel: audit: type=1130 audit(1768347711.237:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:51.248348 systemd-nsresourced[1542]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 13 23:41:51.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:51.257966 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 13 23:41:51.269078 kernel: audit: type=1130 audit(1768347711.260:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:51.443127 systemd-oomd[1537]: No swap; memory pressure usage will be degraded Jan 13 23:41:51.444454 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 13 23:41:51.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:51.454293 kernel: audit: type=1130 audit(1768347711.446:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:51.511889 systemd-resolved[1538]: Positive Trust Anchors: Jan 13 23:41:51.512479 kernel: loop3: detected capacity change from 0 to 45344 Jan 13 23:41:51.511929 systemd-resolved[1538]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 23:41:51.511939 systemd-resolved[1538]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 13 23:41:51.512005 systemd-resolved[1538]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 23:41:51.528651 systemd-resolved[1538]: Defaulting to hostname 'linux'. 
Jan 13 23:41:51.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:51.531146 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 23:41:51.534029 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 23:41:51.775145 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 23:41:51.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:51.777000 audit: BPF prog-id=8 op=UNLOAD Jan 13 23:41:51.777000 audit: BPF prog-id=7 op=UNLOAD Jan 13 23:41:51.779000 audit: BPF prog-id=28 op=LOAD Jan 13 23:41:51.779000 audit: BPF prog-id=29 op=LOAD Jan 13 23:41:51.781743 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 23:41:51.826078 kernel: loop4: detected capacity change from 0 to 207008 Jan 13 23:41:51.841257 systemd-udevd[1561]: Using default interface naming scheme 'v257'. Jan 13 23:41:52.037335 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 23:41:52.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:52.047000 audit: BPF prog-id=30 op=LOAD Jan 13 23:41:52.049522 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 23:41:52.135643 kernel: loop5: detected capacity change from 0 to 100192 Jan 13 23:41:52.158050 kernel: loop6: detected capacity change from 0 to 61504 Jan 13 23:41:52.183718 kernel: loop7: detected capacity change from 0 to 45344 Jan 13 23:41:52.209358 (udev-worker)[1579]: Network interface NamePolicy= disabled on kernel command line. Jan 13 23:41:52.215919 kernel: loop1: detected capacity change from 0 to 207008 Jan 13 23:41:52.217318 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 23:41:52.248063 systemd-networkd[1570]: lo: Link UP Jan 13 23:41:52.248106 systemd-networkd[1570]: lo: Gained carrier Jan 13 23:41:52.248256 (sd-merge)[1582]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Jan 13 23:41:52.251967 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 23:41:52.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:52.257521 systemd[1]: Reached target network.target - Network. Jan 13 23:41:52.267305 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 13 23:41:52.276137 (sd-merge)[1582]: Merged extensions into '/usr'. Jan 13 23:41:52.276794 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 23:41:52.299850 systemd[1]: Reload requested from client PID 1483 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 23:41:52.299884 systemd[1]: Reloading... 
Jan 13 23:41:52.458232 systemd-networkd[1570]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 13 23:41:52.458257 systemd-networkd[1570]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 23:41:52.489614 systemd-networkd[1570]: eth0: Link UP Jan 13 23:41:52.489956 systemd-networkd[1570]: eth0: Gained carrier Jan 13 23:41:52.489992 systemd-networkd[1570]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 13 23:41:52.527335 systemd-networkd[1570]: eth0: DHCPv4 address 172.31.22.81/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 23:41:52.546350 zram_generator::config[1642]: No configuration found. Jan 13 23:41:53.217047 systemd[1]: Reloading finished in 916 ms. Jan 13 23:41:53.245506 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 23:41:53.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:53.249764 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 13 23:41:53.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:53.316300 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 23:41:53.333315 systemd[1]: Starting ensure-sysext.service... Jan 13 23:41:53.340342 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 23:41:53.346410 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 23:41:53.357794 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 13 23:41:53.361000 audit: BPF prog-id=31 op=LOAD Jan 13 23:41:53.362000 audit: BPF prog-id=30 op=UNLOAD Jan 13 23:41:53.363000 audit: BPF prog-id=32 op=LOAD Jan 13 23:41:53.365000 audit: BPF prog-id=15 op=UNLOAD Jan 13 23:41:53.365000 audit: BPF prog-id=33 op=LOAD Jan 13 23:41:53.365000 audit: BPF prog-id=34 op=LOAD Jan 13 23:41:53.365000 audit: BPF prog-id=16 op=UNLOAD Jan 13 23:41:53.365000 audit: BPF prog-id=17 op=UNLOAD Jan 13 23:41:53.367000 audit: BPF prog-id=35 op=LOAD Jan 13 23:41:53.368000 audit: BPF prog-id=18 op=UNLOAD Jan 13 23:41:53.368000 audit: BPF prog-id=36 op=LOAD Jan 13 23:41:53.368000 audit: BPF prog-id=37 op=LOAD Jan 13 23:41:53.368000 audit: BPF prog-id=19 op=UNLOAD Jan 13 23:41:53.368000 audit: BPF prog-id=20 op=UNLOAD Jan 13 23:41:53.369000 audit: BPF prog-id=38 op=LOAD Jan 13 23:41:53.370000 audit: BPF prog-id=39 op=LOAD Jan 13 23:41:53.370000 audit: BPF prog-id=28 op=UNLOAD Jan 13 23:41:53.370000 audit: BPF prog-id=29 op=UNLOAD Jan 13 23:41:53.373000 audit: BPF prog-id=40 op=LOAD Jan 13 23:41:53.373000 audit: BPF prog-id=21 op=UNLOAD Jan 13 23:41:53.379000 audit: BPF prog-id=41 op=LOAD Jan 13 23:41:53.379000 audit: BPF prog-id=25 op=UNLOAD Jan 13 23:41:53.379000 audit: BPF prog-id=42 op=LOAD Jan 13 23:41:53.379000 audit: BPF prog-id=43 op=LOAD Jan 13 23:41:53.379000 audit: BPF prog-id=26 op=UNLOAD Jan 13 23:41:53.379000 audit: BPF prog-id=27 op=UNLOAD Jan 13 23:41:53.382000 audit: BPF prog-id=44 op=LOAD Jan 13 23:41:53.382000 audit: BPF prog-id=22 op=UNLOAD Jan 13 23:41:53.382000 audit: BPF prog-id=45 op=LOAD Jan 13 23:41:53.384000 audit: BPF prog-id=46 op=LOAD Jan 13 23:41:53.388000 audit: BPF prog-id=23 op=UNLOAD Jan 13 23:41:53.388000 audit: BPF prog-id=24 op=UNLOAD Jan 13 23:41:53.410289 systemd[1]: Reload requested from client PID 1781 ('systemctl') (unit ensure-sysext.service)... Jan 13 23:41:53.410486 systemd[1]: Reloading... Jan 13 23:41:53.419534 systemd-tmpfiles[1783]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 13 23:41:53.419624 systemd-tmpfiles[1783]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 13 23:41:53.420334 systemd-tmpfiles[1783]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 23:41:53.423739 systemd-tmpfiles[1783]: ACLs are not supported, ignoring. Jan 13 23:41:53.423909 systemd-tmpfiles[1783]: ACLs are not supported, ignoring. Jan 13 23:41:53.445220 systemd-tmpfiles[1783]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 23:41:53.445253 systemd-tmpfiles[1783]: Skipping /boot Jan 13 23:41:53.475036 systemd-tmpfiles[1783]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 23:41:53.475086 systemd-tmpfiles[1783]: Skipping /boot Jan 13 23:41:53.605072 zram_generator::config[1826]: No configuration found. Jan 13 23:41:53.689251 systemd-networkd[1570]: eth0: Gained IPv6LL Jan 13 23:41:54.072945 systemd[1]: Reloading finished in 661 ms. Jan 13 23:41:54.092496 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 23:41:54.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:41:54.097000 audit: BPF prog-id=47 op=LOAD Jan 13 23:41:54.097000 audit: BPF prog-id=40 op=UNLOAD Jan 13 23:41:54.098000 audit: BPF prog-id=48 op=LOAD Jan 13 23:41:54.098000 audit: BPF prog-id=44 op=UNLOAD Jan 13 23:41:54.099000 audit: BPF prog-id=49 op=LOAD Jan 13 23:41:54.099000 audit: BPF prog-id=50 op=LOAD Jan 13 23:41:54.099000 audit: BPF prog-id=45 op=UNLOAD Jan 13 23:41:54.099000 audit: BPF prog-id=46 op=UNLOAD Jan 13 23:41:54.100000 audit: BPF prog-id=51 op=LOAD Jan 13 23:41:54.100000 audit: BPF prog-id=31 op=UNLOAD Jan 13 23:41:54.103000 audit: BPF prog-id=52 op=LOAD Jan 13 23:41:54.103000 audit: BPF prog-id=32 op=UNLOAD Jan 13 23:41:54.103000 audit: BPF prog-id=53 op=LOAD Jan 13 23:41:54.103000 audit: BPF prog-id=54 op=LOAD Jan 13 23:41:54.103000 audit: BPF prog-id=33 op=UNLOAD Jan 13 23:41:54.103000 audit: BPF prog-id=34 op=UNLOAD Jan 13 23:41:54.113000 audit: BPF prog-id=55 op=LOAD Jan 13 23:41:54.113000 audit: BPF prog-id=35 op=UNLOAD Jan 13 23:41:54.113000 audit: BPF prog-id=56 op=LOAD Jan 13 23:41:54.113000 audit: BPF prog-id=57 op=LOAD Jan 13 23:41:54.113000 audit: BPF prog-id=36 op=UNLOAD Jan 13 23:41:54.114000 audit: BPF prog-id=37 op=UNLOAD Jan 13 23:41:54.114000 audit: BPF prog-id=58 op=LOAD Jan 13 23:41:54.114000 audit: BPF prog-id=59 op=LOAD Jan 13 23:41:54.114000 audit: BPF prog-id=38 op=UNLOAD Jan 13 23:41:54.114000 audit: BPF prog-id=39 op=UNLOAD Jan 13 23:41:54.116000 audit: BPF prog-id=60 op=LOAD Jan 13 23:41:54.116000 audit: BPF prog-id=41 op=UNLOAD Jan 13 23:41:54.116000 audit: BPF prog-id=61 op=LOAD Jan 13 23:41:54.116000 audit: BPF prog-id=62 op=LOAD Jan 13 23:41:54.116000 audit: BPF prog-id=42 op=UNLOAD Jan 13 23:41:54.116000 audit: BPF prog-id=43 op=UNLOAD Jan 13 23:41:54.124557 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 23:41:54.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:54.131941 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 23:41:54.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:54.138439 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:41:54.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:54.160462 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 23:41:54.165817 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 23:41:54.175456 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 23:41:54.182470 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 23:41:54.191844 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 23:41:54.199665 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jan 13 23:41:54.218611 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 23:41:54.222884 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 23:41:54.232806 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 23:41:54.240585 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 23:41:54.243264 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 23:41:54.243686 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 13 23:41:54.243945 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 13 23:41:54.252864 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 23:41:54.254424 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 23:41:54.254786 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 13 23:41:54.255070 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 13 23:41:54.260000 audit[1883]: SYSTEM_BOOT pid=1883 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 13 23:41:54.275004 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 23:41:54.288194 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 23:41:54.290805 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 23:41:54.291061 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 13 23:41:54.291144 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 13 23:41:54.291233 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 23:41:54.302194 systemd[1]: Finished ensure-sysext.service. Jan 13 23:41:54.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:54.310137 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 23:41:54.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:41:54.313780 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 23:41:54.315386 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 23:41:54.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:54.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:54.331549 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 23:41:54.332201 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 23:41:54.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:54.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:54.335999 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 23:41:54.337427 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 23:41:54.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:54.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:54.341052 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 23:41:54.342154 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 23:41:54.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:54.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:54.357837 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 23:41:54.357980 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 23:41:54.376906 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 23:41:54.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:41:54.463000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 13 23:41:54.463000 audit[1914]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffc6e4e60 a2=420 a3=0 items=0 ppid=1879 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:54.463000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 13 23:41:54.465687 augenrules[1914]: No rules Jan 13 23:41:54.468251 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 23:41:54.469707 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 23:41:54.517128 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 23:41:54.520731 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 23:41:57.618848 ldconfig[1881]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 23:41:57.633131 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 23:41:57.638769 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 23:41:57.669130 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 23:41:57.672405 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 23:41:57.675143 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 23:41:57.678136 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 23:41:57.682600 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 23:41:57.685460 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 23:41:57.688997 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 13 23:41:57.692265 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 13 23:41:57.694857 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 23:41:57.697678 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 23:41:57.697871 systemd[1]: Reached target paths.target - Path Units. Jan 13 23:41:57.700523 systemd[1]: Reached target timers.target - Timer Units. Jan 13 23:41:57.704554 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 23:41:57.709991 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 23:41:57.716440 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 13 23:41:57.720061 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 13 23:41:57.723098 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 13 23:41:57.730307 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Jan 13 23:41:57.733392 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 13 23:41:57.737611 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 23:41:57.740532 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 23:41:57.742740 systemd[1]: Reached target basic.target - Basic System. Jan 13 23:41:57.745438 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 23:41:57.745501 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 23:41:57.749196 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 23:41:57.756307 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 23:41:57.768439 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 23:41:57.773764 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 23:41:57.780264 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 23:41:57.786483 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 23:41:57.790150 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 23:41:57.792992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:41:57.802184 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 23:41:57.812545 systemd[1]: Started ntpd.service - Network Time Service. Jan 13 23:41:57.822433 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 23:41:57.852361 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 23:41:57.864429 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 23:41:57.873665 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 23:41:57.880437 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 23:41:57.889158 jq[1931]: false Jan 13 23:41:57.892416 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 23:41:57.895592 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 23:41:57.898596 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 23:41:57.901798 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 23:41:57.907322 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 23:41:57.919541 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 23:41:57.923981 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 23:41:57.946595 extend-filesystems[1932]: Found /dev/nvme0n1p6 Jan 13 23:41:57.926661 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 23:41:57.982143 extend-filesystems[1932]: Found /dev/nvme0n1p9 Jan 13 23:41:58.023252 extend-filesystems[1932]: Checking size of /dev/nvme0n1p9 Jan 13 23:41:58.051363 extend-filesystems[1932]: Resized partition /dev/nvme0n1p9 Jan 13 23:41:58.053645 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jan 13 23:41:58.054288 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 23:41:58.066073 extend-filesystems[1989]: resize2fs 1.47.3 (8-Jul-2025) Jan 13 23:41:58.078662 jq[1949]: true Jan 13 23:41:58.085073 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Jan 13 23:41:58.085152 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 13 23:41:58.094897 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 13 23:41:58.126775 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 23:41:58.138293 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Jan 13 23:41:58.130128 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 23:41:58.159678 ntpd[1935]: ntpd 4.2.8p18@1.4062-o Tue Jan 13 21:06:29 UTC 2026 (1): Starting Jan 13 23:41:58.161238 ntpd[1935]: 13 Jan 23:41:58 ntpd[1935]: ntpd 4.2.8p18@1.4062-o Tue Jan 13 21:06:29 UTC 2026 (1): Starting Jan 13 23:41:58.161238 ntpd[1935]: 13 Jan 23:41:58 ntpd[1935]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 23:41:58.161238 ntpd[1935]: 13 Jan 23:41:58 ntpd[1935]: ---------------------------------------------------- Jan 13 23:41:58.161238 ntpd[1935]: 13 Jan 23:41:58 ntpd[1935]: ntp-4 is maintained by Network Time Foundation, Jan 13 23:41:58.161238 ntpd[1935]: 13 Jan 23:41:58 ntpd[1935]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 23:41:58.161238 ntpd[1935]: 13 Jan 23:41:58 ntpd[1935]: corporation. Support and training for ntp-4 are Jan 13 23:41:58.161238 ntpd[1935]: 13 Jan 23:41:58 ntpd[1935]: available at https://www.nwtime.org/support Jan 13 23:41:58.161238 ntpd[1935]: 13 Jan 23:41:58 ntpd[1935]: ---------------------------------------------------- Jan 13 23:41:58.159797 ntpd[1935]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 23:41:58.166971 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 23:41:58.182986 ntpd[1935]: 13 Jan 23:41:58 ntpd[1935]: proto: precision = 0.096 usec (-23) Jan 13 23:41:58.183152 extend-filesystems[1989]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 13 23:41:58.183152 extend-filesystems[1989]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 13 23:41:58.183152 extend-filesystems[1989]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Jan 13 23:41:58.159817 ntpd[1935]: ---------------------------------------------------- Jan 13 23:41:58.170253 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 23:41:58.195684 tar[1957]: linux-arm64/LICENSE Jan 13 23:41:58.196096 extend-filesystems[1932]: Resized filesystem in /dev/nvme0n1p9 Jan 13 23:41:58.159834 ntpd[1935]: ntp-4 is maintained by Network Time Foundation, Jan 13 23:41:58.170780 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 23:41:58.200403 ntpd[1935]: 13 Jan 23:41:58 ntpd[1935]: basedate set to 2026-01-01 Jan 13 23:41:58.200403 ntpd[1935]: 13 Jan 23:41:58 ntpd[1935]: gps base set to 2026-01-04 (week 2400) Jan 13 23:41:58.200403 ntpd[1935]: 13 Jan 23:41:58 ntpd[1935]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 23:41:58.200403 ntpd[1935]: 13 Jan 23:41:58 ntpd[1935]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 23:41:58.159851 ntpd[1935]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 23:41:58.159868 ntpd[1935]: corporation. 
Support and training for ntp-4 are Jan 13 23:41:58.159885 ntpd[1935]: available at https://www.nwtime.org/support Jan 13 23:41:58.159901 ntpd[1935]: ---------------------------------------------------- Jan 13 23:41:58.181334 ntpd[1935]: proto: precision = 0.096 usec (-23) Jan 13 23:41:58.199687 ntpd[1935]: basedate set to 2026-01-01 Jan 13 23:41:58.199722 ntpd[1935]: gps base set to 2026-01-04 (week 2400) Jan 13 23:41:58.199920 ntpd[1935]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 23:41:58.199966 ntpd[1935]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 23:41:58.212988 tar[1957]: linux-arm64/helm Jan 13 23:41:58.213170 ntpd[1935]: 13 Jan 23:41:58 ntpd[1935]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 23:41:58.213170 ntpd[1935]: 13 Jan 23:41:58 ntpd[1935]: Listen normally on 3 eth0 172.31.22.81:123 Jan 13 23:41:58.213170 ntpd[1935]: 13 Jan 23:41:58 ntpd[1935]: Listen normally on 4 lo [::1]:123 Jan 13 23:41:58.213170 ntpd[1935]: 13 Jan 23:41:58 ntpd[1935]: Listen normally on 5 eth0 [fe80::470:e1ff:fef1:5d43%2]:123 Jan 13 23:41:58.213170 ntpd[1935]: 13 Jan 23:41:58 ntpd[1935]: Listening on routing socket on fd #22 for interface updates Jan 13 23:41:58.203041 ntpd[1935]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 23:41:58.203103 ntpd[1935]: Listen normally on 3 eth0 172.31.22.81:123 Jan 13 23:41:58.203153 ntpd[1935]: Listen normally on 4 lo [::1]:123 Jan 13 23:41:58.203198 ntpd[1935]: Listen normally on 5 eth0 [fe80::470:e1ff:fef1:5d43%2]:123 Jan 13 23:41:58.206072 ntpd[1935]: Listening on routing socket on fd #22 for interface updates Jan 13 23:41:58.226116 update_engine[1948]: I20260113 23:41:58.224117 1948 main.cc:92] Flatcar Update Engine starting Jan 13 23:41:58.236387 jq[1995]: true Jan 13 23:41:58.251952 ntpd[1935]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 23:41:58.253518 ntpd[1935]: 13 Jan 23:41:58 ntpd[1935]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 23:41:58.261052 ntpd[1935]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 23:41:58.262109 ntpd[1935]: 13 Jan 23:41:58 ntpd[1935]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 23:41:58.265096 dbus-daemon[1929]: [system] SELinux support is enabled Jan 13 23:41:58.266003 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 23:41:58.278628 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 23:41:58.278704 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 23:41:58.281828 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 23:41:58.281887 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 23:41:58.317621 dbus-daemon[1929]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1570 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 23:41:58.341545 dbus-daemon[1929]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 23:41:58.355441 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jan 13 23:41:58.358284 systemd[1]: Started update-engine.service - Update Engine. Jan 13 23:41:58.363788 update_engine[1948]: I20260113 23:41:58.362942 1948 update_check_scheduler.cc:74] Next update check in 9m6s Jan 13 23:41:58.404533 systemd-logind[1947]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 23:41:58.404609 systemd-logind[1947]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 13 23:41:58.405123 systemd-logind[1947]: New seat seat0. Jan 13 23:41:58.408507 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 23:41:58.411522 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 23:41:58.470065 amazon-ssm-agent[1996]: Initializing new seelog logger Jan 13 23:41:58.470065 amazon-ssm-agent[1996]: New Seelog Logger Creation Complete Jan 13 23:41:58.470065 amazon-ssm-agent[1996]: 2026/01/13 23:41:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:41:58.470065 amazon-ssm-agent[1996]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:41:58.475047 amazon-ssm-agent[1996]: 2026/01/13 23:41:58 processing appconfig overrides Jan 13 23:41:58.475047 amazon-ssm-agent[1996]: 2026/01/13 23:41:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:41:58.475047 amazon-ssm-agent[1996]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:41:58.475047 amazon-ssm-agent[1996]: 2026/01/13 23:41:58 processing appconfig overrides Jan 13 23:41:58.475047 amazon-ssm-agent[1996]: 2026-01-13 23:41:58.4718 INFO Proxy environment variables: Jan 13 23:41:58.475047 amazon-ssm-agent[1996]: 2026/01/13 23:41:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:41:58.475047 amazon-ssm-agent[1996]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:41:58.475047 amazon-ssm-agent[1996]: 2026/01/13 23:41:58 processing appconfig overrides Jan 13 23:41:58.508708 amazon-ssm-agent[1996]: 2026/01/13 23:41:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:41:58.508708 amazon-ssm-agent[1996]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 13 23:41:58.508708 amazon-ssm-agent[1996]: 2026/01/13 23:41:58 processing appconfig overrides Jan 13 23:41:58.573894 coreos-metadata[1928]: Jan 13 23:41:58.573 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 23:41:58.586881 coreos-metadata[1928]: Jan 13 23:41:58.582 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 13 23:41:58.591108 coreos-metadata[1928]: Jan 13 23:41:58.590 INFO Fetch successful Jan 13 23:41:58.591108 coreos-metadata[1928]: Jan 13 23:41:58.590 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 13 23:41:58.595833 coreos-metadata[1928]: Jan 13 23:41:58.595 INFO Fetch successful Jan 13 23:41:58.595833 coreos-metadata[1928]: Jan 13 23:41:58.595 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 13 23:41:58.598472 coreos-metadata[1928]: Jan 13 23:41:58.598 INFO Fetch successful Jan 13 23:41:58.598472 coreos-metadata[1928]: Jan 13 23:41:58.598 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 13 23:41:58.601371 coreos-metadata[1928]: Jan 13 23:41:58.601 INFO Fetch successful Jan 13 23:41:58.601371 coreos-metadata[1928]: Jan 13 23:41:58.601 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 13 23:41:58.626391 coreos-metadata[1928]: Jan 13 23:41:58.607 INFO Fetch failed with 404: resource not found Jan 13 23:41:58.626391 coreos-metadata[1928]: Jan 13 23:41:58.607 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 13 23:41:58.626391 coreos-metadata[1928]: Jan 13 23:41:58.610 INFO Fetch successful Jan 13 23:41:58.626391 coreos-metadata[1928]: Jan 13 23:41:58.610 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 13 23:41:58.626391 coreos-metadata[1928]: Jan 13 23:41:58.611 INFO Fetch successful Jan 13 23:41:58.626391 coreos-metadata[1928]: Jan 13 23:41:58.611 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 13 23:41:58.626391 coreos-metadata[1928]: Jan 13 23:41:58.611 INFO Fetch successful Jan 13 23:41:58.626391 coreos-metadata[1928]: Jan 13 23:41:58.611 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 13 23:41:58.626391 coreos-metadata[1928]: Jan 13 23:41:58.613 INFO Fetch successful Jan 13 23:41:58.626391 coreos-metadata[1928]: Jan 13 23:41:58.613 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 13 23:41:58.626917 amazon-ssm-agent[1996]: 2026-01-13 23:41:58.4719 INFO http_proxy: Jan 13 23:41:58.630889 coreos-metadata[1928]: Jan 13 23:41:58.627 INFO Fetch successful Jan 13 23:41:58.673805 bash[2039]: Updated "/home/core/.ssh/authorized_keys" Jan 13 23:41:58.677473 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 23:41:58.689877 systemd[1]: Starting sshkeys.service... Jan 13 23:41:58.725751 amazon-ssm-agent[1996]: 2026-01-13 23:41:58.4719 INFO no_proxy: Jan 13 23:41:58.731335 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 23:41:58.743842 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 23:41:58.790759 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Jan 13 23:41:58.794507 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 23:41:58.828757 amazon-ssm-agent[1996]: 2026-01-13 23:41:58.4719 INFO https_proxy: Jan 13 23:41:58.928275 amazon-ssm-agent[1996]: 2026-01-13 23:41:58.4721 INFO Checking if agent identity type OnPrem can be assumed Jan 13 23:41:59.002052 containerd[1990]: time="2026-01-13T23:41:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 13 23:41:59.002052 containerd[1990]: time="2026-01-13T23:41:59.001736599Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 13 23:41:59.036450 amazon-ssm-agent[1996]: 2026-01-13 23:41:58.4722 INFO Checking if agent identity type EC2 can be assumed Jan 13 23:41:59.054915 containerd[1990]: time="2026-01-13T23:41:59.052577059Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.144µs" Jan 13 23:41:59.054915 containerd[1990]: time="2026-01-13T23:41:59.052657963Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 13 23:41:59.054915 containerd[1990]: time="2026-01-13T23:41:59.052741483Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 13 23:41:59.054915 containerd[1990]: time="2026-01-13T23:41:59.052774723Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 13 23:41:59.054915 containerd[1990]: time="2026-01-13T23:41:59.053108635Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 13 23:41:59.054915 containerd[1990]: time="2026-01-13T23:41:59.053162875Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 13 23:41:59.054915 containerd[1990]: time="2026-01-13T23:41:59.053296735Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 13 23:41:59.054915 containerd[1990]: time="2026-01-13T23:41:59.053336263Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 13 23:41:59.054915 containerd[1990]: time="2026-01-13T23:41:59.053896027Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 13 23:41:59.054915 containerd[1990]: time="2026-01-13T23:41:59.053942251Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 13 23:41:59.054915 containerd[1990]: time="2026-01-13T23:41:59.053981131Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 13 23:41:59.061552 containerd[1990]: time="2026-01-13T23:41:59.054009163Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 13 23:41:59.061552 containerd[1990]: time="2026-01-13T23:41:59.060136303Z" level=info msg="skip loading plugin" error="EROFS 
unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 13 23:41:59.061552 containerd[1990]: time="2026-01-13T23:41:59.060193699Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 13 23:41:59.061552 containerd[1990]: time="2026-01-13T23:41:59.060426415Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 13 23:41:59.061552 containerd[1990]: time="2026-01-13T23:41:59.060813835Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 13 23:41:59.061552 containerd[1990]: time="2026-01-13T23:41:59.060881539Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 13 23:41:59.061552 containerd[1990]: time="2026-01-13T23:41:59.060910891Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 13 23:41:59.061552 containerd[1990]: time="2026-01-13T23:41:59.060971035Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 13 23:41:59.068857 containerd[1990]: time="2026-01-13T23:41:59.065104123Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 13 23:41:59.068857 containerd[1990]: time="2026-01-13T23:41:59.065320231Z" level=info msg="metadata content store policy set" policy=shared Jan 13 23:41:59.076057 containerd[1990]: time="2026-01-13T23:41:59.074636779Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 13 23:41:59.076057 containerd[1990]: time="2026-01-13T23:41:59.074765563Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 13 23:41:59.076057 containerd[1990]: time="2026-01-13T23:41:59.075194876Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 13 23:41:59.076057 containerd[1990]: time="2026-01-13T23:41:59.075225692Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 13 23:41:59.076057 containerd[1990]: time="2026-01-13T23:41:59.075256700Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 13 23:41:59.076057 containerd[1990]: time="2026-01-13T23:41:59.075286820Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 13 23:41:59.076057 containerd[1990]: time="2026-01-13T23:41:59.075314684Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 13 23:41:59.076057 containerd[1990]: time="2026-01-13T23:41:59.075341492Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 13 23:41:59.076057 containerd[1990]: time="2026-01-13T23:41:59.075368912Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 13 23:41:59.076057 containerd[1990]: time="2026-01-13T23:41:59.075400040Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 
Jan 13 23:41:59.076057 containerd[1990]: time="2026-01-13T23:41:59.075432152Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 13 23:41:59.076057 containerd[1990]: time="2026-01-13T23:41:59.075460616Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 13 23:41:59.076057 containerd[1990]: time="2026-01-13T23:41:59.075494312Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 13 23:41:59.076057 containerd[1990]: time="2026-01-13T23:41:59.075533768Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 13 23:41:59.076726 containerd[1990]: time="2026-01-13T23:41:59.075789260Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 13 23:41:59.076726 containerd[1990]: time="2026-01-13T23:41:59.075851192Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 13 23:41:59.076726 containerd[1990]: time="2026-01-13T23:41:59.075885536Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 13 23:41:59.076726 containerd[1990]: time="2026-01-13T23:41:59.075913820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 13 23:41:59.076726 containerd[1990]: time="2026-01-13T23:41:59.075942104Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 13 23:41:59.076726 containerd[1990]: time="2026-01-13T23:41:59.075967352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 13 23:41:59.076726 containerd[1990]: time="2026-01-13T23:41:59.075997688Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 13 23:41:59.082442 containerd[1990]: time="2026-01-13T23:41:59.079163372Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 13 23:41:59.082442 containerd[1990]: time="2026-01-13T23:41:59.079285952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 13 23:41:59.082442 containerd[1990]: time="2026-01-13T23:41:59.079319528Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 13 23:41:59.082442 containerd[1990]: time="2026-01-13T23:41:59.079742084Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 13 23:41:59.082442 containerd[1990]: time="2026-01-13T23:41:59.079863092Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 13 23:41:59.082442 containerd[1990]: time="2026-01-13T23:41:59.079963964Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 13 23:41:59.082442 containerd[1990]: time="2026-01-13T23:41:59.080044292Z" level=info msg="Start snapshots syncer" Jan 13 23:41:59.082442 containerd[1990]: time="2026-01-13T23:41:59.080972816Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 13 23:41:59.082874 containerd[1990]: time="2026-01-13T23:41:59.082399268Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 13 23:41:59.088405 containerd[1990]: time="2026-01-13T23:41:59.085346756Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 13 23:41:59.088405 containerd[1990]: time="2026-01-13T23:41:59.085574732Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 13 23:41:59.088405 containerd[1990]: time="2026-01-13T23:41:59.085988420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 13 23:41:59.088405 containerd[1990]: time="2026-01-13T23:41:59.086066948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 13 23:41:59.088405 containerd[1990]: time="2026-01-13T23:41:59.086097620Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 13 23:41:59.088405 containerd[1990]: time="2026-01-13T23:41:59.086149496Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 13 23:41:59.088405 containerd[1990]: time="2026-01-13T23:41:59.086184128Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 13 23:41:59.088405 containerd[1990]: time="2026-01-13T23:41:59.086241512Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 13 23:41:59.088405 containerd[1990]: time="2026-01-13T23:41:59.086272616Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 13 23:41:59.088405 containerd[1990]: time="2026-01-13T23:41:59.086327336Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 13 
23:41:59.088405 containerd[1990]: time="2026-01-13T23:41:59.086367944Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 13 23:41:59.088405 containerd[1990]: time="2026-01-13T23:41:59.086501876Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 13 23:41:59.088405 containerd[1990]: time="2026-01-13T23:41:59.087176636Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 13 23:41:59.088405 containerd[1990]: time="2026-01-13T23:41:59.087216764Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 13 23:41:59.089419 containerd[1990]: time="2026-01-13T23:41:59.087259004Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 13 23:41:59.089419 containerd[1990]: time="2026-01-13T23:41:59.087287540Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 13 23:41:59.089419 containerd[1990]: time="2026-01-13T23:41:59.087315404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 13 23:41:59.089419 containerd[1990]: time="2026-01-13T23:41:59.087342308Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 13 23:41:59.089419 containerd[1990]: time="2026-01-13T23:41:59.087504752Z" level=info msg="runtime interface created" Jan 13 23:41:59.089419 containerd[1990]: time="2026-01-13T23:41:59.087527240Z" level=info msg="created NRI interface" Jan 13 23:41:59.089419 containerd[1990]: time="2026-01-13T23:41:59.087550232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 13 23:41:59.089419 containerd[1990]: time="2026-01-13T23:41:59.087579908Z" level=info msg="Connect containerd service" Jan 13 23:41:59.089419 containerd[1990]: time="2026-01-13T23:41:59.087641504Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 23:41:59.097661 containerd[1990]: time="2026-01-13T23:41:59.094884152Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 23:41:59.141108 amazon-ssm-agent[1996]: 2026-01-13 23:41:59.0506 INFO Agent will take identity from EC2 Jan 13 23:41:59.149690 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 23:41:59.160319 dbus-daemon[1929]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 23:41:59.167430 dbus-daemon[1929]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2010 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 23:41:59.179947 systemd[1]: Starting polkit.service - Authorization Manager... 
Jan 13 23:41:59.241408 amazon-ssm-agent[1996]: 2026-01-13 23:41:59.0593 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 13 23:41:59.295941 coreos-metadata[2059]: Jan 13 23:41:59.294 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 23:41:59.298989 coreos-metadata[2059]: Jan 13 23:41:59.297 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 13 23:41:59.310955 coreos-metadata[2059]: Jan 13 23:41:59.308 INFO Fetch successful Jan 13 23:41:59.310955 coreos-metadata[2059]: Jan 13 23:41:59.309 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 23:41:59.318061 coreos-metadata[2059]: Jan 13 23:41:59.312 INFO Fetch successful Jan 13 23:41:59.320192 unknown[2059]: wrote ssh authorized keys file for user: core Jan 13 23:41:59.341058 amazon-ssm-agent[1996]: 2026-01-13 23:41:59.0593 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 13 23:41:59.427129 update-ssh-keys[2147]: Updated "/home/core/.ssh/authorized_keys" Jan 13 23:41:59.430686 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 23:41:59.441331 systemd[1]: Finished sshkeys.service. Jan 13 23:41:59.451128 amazon-ssm-agent[1996]: 2026-01-13 23:41:59.0593 INFO [amazon-ssm-agent] Starting Core Agent Jan 13 23:41:59.549338 amazon-ssm-agent[1996]: 2026-01-13 23:41:59.0593 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jan 13 23:41:59.634493 locksmithd[2011]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 23:41:59.649960 amazon-ssm-agent[1996]: 2026-01-13 23:41:59.0593 INFO [Registrar] Starting registrar module Jan 13 23:41:59.685748 containerd[1990]: time="2026-01-13T23:41:59.678615382Z" level=info msg="Start subscribing containerd event" Jan 13 23:41:59.685748 containerd[1990]: time="2026-01-13T23:41:59.678741046Z" level=info msg="Start recovering state" Jan 13 23:41:59.685748 containerd[1990]: time="2026-01-13T23:41:59.679859123Z" level=info msg="Start event monitor" Jan 13 23:41:59.685748 containerd[1990]: time="2026-01-13T23:41:59.679901999Z" level=info msg="Start cni network conf syncer for default" Jan 13 23:41:59.685748 containerd[1990]: time="2026-01-13T23:41:59.679969871Z" level=info msg="Start streaming server" Jan 13 23:41:59.685748 containerd[1990]: time="2026-01-13T23:41:59.679990091Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 13 23:41:59.685748 containerd[1990]: time="2026-01-13T23:41:59.680007395Z" level=info msg="runtime interface starting up..." Jan 13 23:41:59.685748 containerd[1990]: time="2026-01-13T23:41:59.680102063Z" level=info msg="starting plugins..." Jan 13 23:41:59.685748 containerd[1990]: time="2026-01-13T23:41:59.680137499Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 13 23:41:59.685748 containerd[1990]: time="2026-01-13T23:41:59.682218203Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 23:41:59.685748 containerd[1990]: time="2026-01-13T23:41:59.682326287Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 23:41:59.685748 containerd[1990]: time="2026-01-13T23:41:59.682443647Z" level=info msg="containerd successfully booted in 0.684436s" Jan 13 23:41:59.682683 systemd[1]: Started containerd.service - containerd container runtime. 
Jan 13 23:41:59.703580 polkitd[2125]: Started polkitd version 126 Jan 13 23:41:59.754220 amazon-ssm-agent[1996]: 2026-01-13 23:41:59.0694 INFO [EC2Identity] Checking disk for registration info Jan 13 23:41:59.801609 polkitd[2125]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 23:41:59.806336 polkitd[2125]: Loading rules from directory /run/polkit-1/rules.d Jan 13 23:41:59.806450 polkitd[2125]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 13 23:41:59.807129 polkitd[2125]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 13 23:41:59.807210 polkitd[2125]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 13 23:41:59.807301 polkitd[2125]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 23:41:59.811501 polkitd[2125]: Finished loading, compiling and executing 2 rules Jan 13 23:41:59.815968 systemd[1]: Started polkit.service - Authorization Manager. Jan 13 23:41:59.821167 dbus-daemon[1929]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 23:41:59.823119 polkitd[2125]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 23:41:59.850143 amazon-ssm-agent[1996]: 2026-01-13 23:41:59.0703 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 13 23:41:59.889593 systemd-hostnamed[2010]: Hostname set to (transient) Jan 13 23:41:59.889632 systemd-resolved[1538]: System hostname changed to 'ip-172-31-22-81'. Jan 13 23:41:59.950508 amazon-ssm-agent[1996]: 2026-01-13 23:41:59.0703 INFO [EC2Identity] Generating registration keypair Jan 13 23:42:00.442417 tar[1957]: linux-arm64/README.md Jan 13 23:42:00.503136 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 23:42:00.560847 sshd_keygen[1980]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 23:42:00.606680 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 23:42:00.613670 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 23:42:00.640145 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 23:42:00.640689 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 23:42:00.646910 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 23:42:00.681816 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 23:42:00.690631 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 23:42:00.702194 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 23:42:00.705226 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 23:42:00.778834 amazon-ssm-agent[1996]: 2026-01-13 23:42:00.7785 INFO [EC2Identity] Checking write access before registering Jan 13 23:42:00.825591 amazon-ssm-agent[1996]: 2026/01/13 23:42:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:42:00.825805 amazon-ssm-agent[1996]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:42:00.826046 amazon-ssm-agent[1996]: 2026/01/13 23:42:00 processing appconfig overrides Jan 13 23:42:00.855625 amazon-ssm-agent[1996]: 2026-01-13 23:42:00.7799 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 13 23:42:00.855625 amazon-ssm-agent[1996]: 2026-01-13 23:42:00.8253 INFO [EC2Identity] EC2 registration was successful. 
Jan 13 23:42:00.855625 amazon-ssm-agent[1996]: 2026-01-13 23:42:00.8253 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jan 13 23:42:00.855852 amazon-ssm-agent[1996]: 2026-01-13 23:42:00.8254 INFO [CredentialRefresher] credentialRefresher has started Jan 13 23:42:00.855852 amazon-ssm-agent[1996]: 2026-01-13 23:42:00.8255 INFO [CredentialRefresher] Starting credentials refresher loop Jan 13 23:42:00.855852 amazon-ssm-agent[1996]: 2026-01-13 23:42:00.8552 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 13 23:42:00.855852 amazon-ssm-agent[1996]: 2026-01-13 23:42:00.8555 INFO [CredentialRefresher] Credentials ready Jan 13 23:42:00.879481 amazon-ssm-agent[1996]: 2026-01-13 23:42:00.8557 INFO [CredentialRefresher] Next credential rotation will be in 29.9999918079 minutes Jan 13 23:42:01.883601 amazon-ssm-agent[1996]: 2026-01-13 23:42:01.8834 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 13 23:42:01.985061 amazon-ssm-agent[1996]: 2026-01-13 23:42:01.8872 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2204) started Jan 13 23:42:02.085396 amazon-ssm-agent[1996]: 2026-01-13 23:42:01.8872 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 13 23:42:03.270880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:42:03.275263 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 23:42:03.279626 systemd[1]: Startup finished in 4.218s (kernel) + 12.200s (initrd) + 16.177s (userspace) = 32.597s. Jan 13 23:42:03.289050 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 23:42:04.012716 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 23:42:04.015303 systemd[1]: Started sshd@0-172.31.22.81:22-20.161.92.111:39078.service - OpenSSH per-connection server daemon (20.161.92.111:39078). Jan 13 23:42:04.670476 sshd[2230]: Accepted publickey for core from 20.161.92.111 port 39078 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:42:04.674981 sshd-session[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:42:04.693586 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 23:42:04.696946 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 23:42:04.719138 systemd-logind[1947]: New session 1 of user core. Jan 13 23:42:04.747207 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 23:42:04.755504 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 23:42:04.781566 (systemd)[2236]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:42:04.786989 systemd-logind[1947]: New session 2 of user core. Jan 13 23:42:05.138368 systemd[2236]: Queued start job for default target default.target. Jan 13 23:42:05.144304 systemd[2236]: Created slice app.slice - User Application Slice. Jan 13 23:42:05.144384 systemd[2236]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 13 23:42:05.144419 systemd[2236]: Reached target paths.target - Paths. Jan 13 23:42:05.144842 systemd[2236]: Reached target timers.target - Timers. 
Jan 13 23:42:05.148123 systemd[2236]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 23:42:05.153473 systemd[2236]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 13 23:42:04.812925 systemd-resolved[1538]: Clock change detected. Flushing caches. Jan 13 23:42:04.831239 systemd-journald[1459]: Time jumped backwards, rotating. Jan 13 23:42:04.831037 systemd[2236]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 23:42:04.831231 systemd[2236]: Reached target sockets.target - Sockets. Jan 13 23:42:04.845320 systemd[2236]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 13 23:42:04.845616 systemd[2236]: Reached target basic.target - Basic System. Jan 13 23:42:04.845745 systemd[2236]: Reached target default.target - Main User Target. Jan 13 23:42:04.845807 systemd[2236]: Startup finished in 396ms. Jan 13 23:42:04.846502 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 23:42:04.856577 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 23:42:05.045397 kubelet[2220]: E0113 23:42:05.045230 2220 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 23:42:05.050041 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 23:42:05.050402 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 23:42:05.051449 systemd[1]: kubelet.service: Consumed 1.474s CPU time, 255M memory peak. Jan 13 23:42:05.118358 systemd[1]: Started sshd@1-172.31.22.81:22-20.161.92.111:39082.service - OpenSSH per-connection server daemon (20.161.92.111:39082). Jan 13 23:42:05.577180 sshd[2253]: Accepted publickey for core from 20.161.92.111 port 39082 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:42:05.579809 sshd-session[2253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:42:05.589078 systemd-logind[1947]: New session 3 of user core. Jan 13 23:42:05.598510 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 23:42:05.817746 sshd[2257]: Connection closed by 20.161.92.111 port 39082 Jan 13 23:42:05.818649 sshd-session[2253]: pam_unix(sshd:session): session closed for user core Jan 13 23:42:05.826740 systemd[1]: sshd@1-172.31.22.81:22-20.161.92.111:39082.service: Deactivated successfully. Jan 13 23:42:05.830057 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 23:42:05.835071 systemd-logind[1947]: Session 3 logged out. Waiting for processes to exit. Jan 13 23:42:05.837475 systemd-logind[1947]: Removed session 3. Jan 13 23:42:05.914708 systemd[1]: Started sshd@2-172.31.22.81:22-20.161.92.111:39092.service - OpenSSH per-connection server daemon (20.161.92.111:39092). Jan 13 23:42:06.372894 sshd[2263]: Accepted publickey for core from 20.161.92.111 port 39092 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:42:06.375469 sshd-session[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:42:06.384220 systemd-logind[1947]: New session 4 of user core. Jan 13 23:42:06.391458 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 13 23:42:06.603743 sshd[2267]: Connection closed by 20.161.92.111 port 39092 Jan 13 23:42:06.604524 sshd-session[2263]: pam_unix(sshd:session): session closed for user core Jan 13 23:42:06.614231 systemd-logind[1947]: Session 4 logged out. Waiting for processes to exit. Jan 13 23:42:06.614722 systemd[1]: sshd@2-172.31.22.81:22-20.161.92.111:39092.service: Deactivated successfully. Jan 13 23:42:06.620263 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 23:42:06.624757 systemd-logind[1947]: Removed session 4. Jan 13 23:42:06.693400 systemd[1]: Started sshd@3-172.31.22.81:22-20.161.92.111:39102.service - OpenSSH per-connection server daemon (20.161.92.111:39102). Jan 13 23:42:07.162077 sshd[2273]: Accepted publickey for core from 20.161.92.111 port 39102 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:42:07.164656 sshd-session[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:42:07.173751 systemd-logind[1947]: New session 5 of user core. Jan 13 23:42:07.178435 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 23:42:07.399674 sshd[2277]: Connection closed by 20.161.92.111 port 39102 Jan 13 23:42:07.400603 sshd-session[2273]: pam_unix(sshd:session): session closed for user core Jan 13 23:42:07.409840 systemd-logind[1947]: Session 5 logged out. Waiting for processes to exit. Jan 13 23:42:07.410882 systemd[1]: sshd@3-172.31.22.81:22-20.161.92.111:39102.service: Deactivated successfully. Jan 13 23:42:07.415777 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 23:42:07.419814 systemd-logind[1947]: Removed session 5. Jan 13 23:42:07.494755 systemd[1]: Started sshd@4-172.31.22.81:22-20.161.92.111:39114.service - OpenSSH per-connection server daemon (20.161.92.111:39114). Jan 13 23:42:07.961227 sshd[2283]: Accepted publickey for core from 20.161.92.111 port 39114 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:42:07.963645 sshd-session[2283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:42:07.971896 systemd-logind[1947]: New session 6 of user core. Jan 13 23:42:07.984464 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 23:42:08.168869 sudo[2288]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 23:42:08.169568 sudo[2288]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 23:42:08.181623 sudo[2288]: pam_unix(sudo:session): session closed for user root Jan 13 23:42:08.259422 sshd[2287]: Connection closed by 20.161.92.111 port 39114 Jan 13 23:42:08.261505 sshd-session[2283]: pam_unix(sshd:session): session closed for user core Jan 13 23:42:08.269720 systemd-logind[1947]: Session 6 logged out. Waiting for processes to exit. Jan 13 23:42:08.270027 systemd[1]: sshd@4-172.31.22.81:22-20.161.92.111:39114.service: Deactivated successfully. Jan 13 23:42:08.273576 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 23:42:08.278427 systemd-logind[1947]: Removed session 6. Jan 13 23:42:08.352799 systemd[1]: Started sshd@5-172.31.22.81:22-20.161.92.111:39118.service - OpenSSH per-connection server daemon (20.161.92.111:39118). 
Jan 13 23:42:08.814989 sshd[2295]: Accepted publickey for core from 20.161.92.111 port 39118 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:42:08.817592 sshd-session[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:42:08.827217 systemd-logind[1947]: New session 7 of user core. Jan 13 23:42:08.837424 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 23:42:08.980400 sudo[2301]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 23:42:08.981021 sudo[2301]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 23:42:08.985014 sudo[2301]: pam_unix(sudo:session): session closed for user root Jan 13 23:42:08.998022 sudo[2300]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 23:42:08.999208 sudo[2300]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 23:42:09.013176 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 23:42:09.073000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 13 23:42:09.075523 kernel: kauditd_printk_skb: 94 callbacks suppressed Jan 13 23:42:09.075625 kernel: audit: type=1305 audit(1768347729.073:238): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 13 23:42:09.073000 audit[2325]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcf969a40 a2=420 a3=0 items=0 ppid=2306 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:09.078890 augenrules[2325]: No rules Jan 13 23:42:09.084852 kernel: audit: type=1300 audit(1768347729.073:238): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcf969a40 a2=420 a3=0 items=0 ppid=2306 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:09.073000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 13 23:42:09.088219 kernel: audit: type=1327 audit(1768347729.073:238): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 13 23:42:09.088920 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 23:42:09.089655 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 23:42:09.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:09.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:09.096583 sudo[2300]: pam_unix(sudo:session): session closed for user root Jan 13 23:42:09.100752 kernel: audit: type=1130 audit(1768347729.089:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:42:09.100847 kernel: audit: type=1131 audit(1768347729.089:240): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:09.095000 audit[2300]: USER_END pid=2300 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:42:09.105831 kernel: audit: type=1106 audit(1768347729.095:241): pid=2300 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:42:09.105907 kernel: audit: type=1104 audit(1768347729.095:242): pid=2300 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:42:09.095000 audit[2300]: CRED_DISP pid=2300 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:42:09.175793 sshd[2299]: Connection closed by 20.161.92.111 port 39118 Jan 13 23:42:09.176654 sshd-session[2295]: pam_unix(sshd:session): session closed for user core Jan 13 23:42:09.177000 audit[2295]: USER_END pid=2295 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:09.185272 systemd[1]: sshd@5-172.31.22.81:22-20.161.92.111:39118.service: Deactivated successfully. Jan 13 23:42:09.178000 audit[2295]: CRED_DISP pid=2295 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:09.190010 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 23:42:09.192251 kernel: audit: type=1106 audit(1768347729.177:243): pid=2295 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:09.192326 kernel: audit: type=1104 audit(1768347729.178:244): pid=2295 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:09.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.22.81:22-20.161.92.111:39118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:42:09.197543 kernel: audit: type=1131 audit(1768347729.183:245): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.22.81:22-20.161.92.111:39118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:09.197659 systemd-logind[1947]: Session 7 logged out. Waiting for processes to exit. Jan 13 23:42:09.200161 systemd-logind[1947]: Removed session 7. Jan 13 23:42:09.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.22.81:22-20.161.92.111:39130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:09.268621 systemd[1]: Started sshd@6-172.31.22.81:22-20.161.92.111:39130.service - OpenSSH per-connection server daemon (20.161.92.111:39130). Jan 13 23:42:09.731000 audit[2334]: USER_ACCT pid=2334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:09.732769 sshd[2334]: Accepted publickey for core from 20.161.92.111 port 39130 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:42:09.733000 audit[2334]: CRED_ACQ pid=2334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:09.733000 audit[2334]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd26a6460 a2=3 a3=0 items=0 ppid=1 pid=2334 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:09.733000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:42:09.735897 sshd-session[2334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:42:09.745221 systemd-logind[1947]: New session 8 of user core. Jan 13 23:42:09.753461 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 23:42:09.759000 audit[2334]: USER_START pid=2334 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:09.762000 audit[2338]: CRED_ACQ pid=2338 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:09.900000 audit[2339]: USER_ACCT pid=2339 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 13 23:42:09.901725 sudo[2339]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 23:42:09.900000 audit[2339]: CRED_REFR pid=2339 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:42:09.902622 sudo[2339]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 23:42:09.901000 audit[2339]: USER_START pid=2339 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:42:11.106377 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 23:42:11.121635 (dockerd)[2357]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 23:42:12.167043 dockerd[2357]: time="2026-01-13T23:42:12.166945348Z" level=info msg="Starting up" Jan 13 23:42:12.168721 dockerd[2357]: time="2026-01-13T23:42:12.168660352Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 13 23:42:12.189518 dockerd[2357]: time="2026-01-13T23:42:12.189433828Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 13 23:42:12.216613 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3639769531-merged.mount: Deactivated successfully. Jan 13 23:42:12.259648 dockerd[2357]: time="2026-01-13T23:42:12.259461856Z" level=info msg="Loading containers: start." Jan 13 23:42:12.307180 kernel: Initializing XFRM netlink socket Jan 13 23:42:12.434000 audit[2406]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2406 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:12.434000 audit[2406]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc4ec1160 a2=0 a3=0 items=0 ppid=2357 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.434000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 13 23:42:12.438000 audit[2408]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2408 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:12.438000 audit[2408]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc67adc30 a2=0 a3=0 items=0 ppid=2357 pid=2408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.438000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 13 23:42:12.443000 audit[2410]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2410 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:12.443000 audit[2410]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd2f16410 a2=0 a3=0 items=0 ppid=2357 pid=2410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.443000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 13 23:42:12.448000 audit[2412]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2412 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:12.448000 audit[2412]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdc3227e0 a2=0 a3=0 items=0 ppid=2357 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.448000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 13 23:42:12.452000 audit[2414]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2414 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:12.452000 audit[2414]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff9b2cc20 a2=0 a3=0 items=0 ppid=2357 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.452000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 13 23:42:12.457000 audit[2416]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2416 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:12.457000 audit[2416]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffffa809e50 a2=0 a3=0 items=0 ppid=2357 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.457000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 13 23:42:12.461000 audit[2418]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2418 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:12.461000 audit[2418]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff61fc590 a2=0 a3=0 items=0 ppid=2357 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.461000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 13 23:42:12.465000 audit[2420]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2420 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:12.465000 audit[2420]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffce5adcb0 a2=0 a3=0 items=0 ppid=2357 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.465000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 13 23:42:12.545000 audit[2423]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2423 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:12.545000 audit[2423]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=fffffca61820 a2=0 a3=0 items=0 ppid=2357 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.545000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 13 23:42:12.550000 audit[2425]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2425 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:12.550000 audit[2425]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffffbccc3b0 a2=0 a3=0 items=0 ppid=2357 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.550000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 13 23:42:12.553000 audit[2427]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2427 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:12.553000 audit[2427]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffce5af960 a2=0 a3=0 items=0 ppid=2357 pid=2427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.553000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 13 23:42:12.558000 audit[2429]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2429 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:12.558000 audit[2429]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc1f244c0 a2=0 a3=0 items=0 ppid=2357 pid=2429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.558000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 13 23:42:12.562000 audit[2431]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2431 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:12.562000 audit[2431]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffcc4845b0 a2=0 a3=0 items=0 ppid=2357 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.562000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 13 23:42:12.635000 audit[2461]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2461 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:12.635000 audit[2461]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffcf6c0f10 a2=0 a3=0 items=0 ppid=2357 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.635000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 13 23:42:12.640000 audit[2463]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2463 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:12.640000 audit[2463]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffffef45800 a2=0 a3=0 items=0 ppid=2357 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.640000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 13 23:42:12.644000 audit[2465]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2465 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:12.644000 audit[2465]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd7c6ca40 a2=0 a3=0 items=0 ppid=2357 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.644000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 13 23:42:12.649000 audit[2467]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2467 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:12.649000 audit[2467]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff3c0f4b0 a2=0 a3=0 items=0 ppid=2357 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.649000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 13 23:42:12.653000 audit[2469]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2469 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:12.653000 audit[2469]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff3bd5b40 a2=0 a3=0 items=0 ppid=2357 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.653000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 13 23:42:12.657000 audit[2471]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2471 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:12.657000 audit[2471]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffdee2b1c0 a2=0 a3=0 items=0 ppid=2357 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.657000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 13 23:42:12.661000 audit[2473]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2473 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:12.661000 audit[2473]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffec08d000 a2=0 a3=0 items=0 ppid=2357 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.661000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 13 23:42:12.666000 audit[2475]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:12.666000 audit[2475]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffce57c7b0 a2=0 a3=0 items=0 ppid=2357 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.666000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 13 23:42:12.671000 audit[2477]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:12.671000 audit[2477]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffecc05700 a2=0 a3=0 items=0 ppid=2357 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.671000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 13 23:42:12.676000 audit[2479]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:12.676000 audit[2479]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff52aef90 a2=0 a3=0 items=0 ppid=2357 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.676000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 13 23:42:12.681000 audit[2481]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2481 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:12.681000 audit[2481]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffc31dd200 a2=0 a3=0 items=0 ppid=2357 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.681000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 13 23:42:12.685000 audit[2483]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2483 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:12.685000 audit[2483]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc318ee80 a2=0 a3=0 items=0 ppid=2357 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.685000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 13 23:42:12.691000 audit[2485]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2485 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:12.691000 audit[2485]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffdf3fd840 a2=0 a3=0 items=0 ppid=2357 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.691000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 13 23:42:12.702000 audit[2490]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:12.702000 audit[2490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcde62f70 a2=0 a3=0 items=0 ppid=2357 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.702000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 13 23:42:12.706000 audit[2492]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:12.706000 audit[2492]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd4b61cb0 a2=0 a3=0 items=0 ppid=2357 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.706000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 13 23:42:12.710000 audit[2494]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2494 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:12.710000 audit[2494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd0be6d40 a2=0 a3=0 items=0 
ppid=2357 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.710000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 13 23:42:12.715000 audit[2496]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2496 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:12.715000 audit[2496]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc5fa23b0 a2=0 a3=0 items=0 ppid=2357 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.715000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 13 23:42:12.719000 audit[2498]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2498 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:12.719000 audit[2498]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffffc5500b0 a2=0 a3=0 items=0 ppid=2357 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.719000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 13 23:42:12.724000 audit[2500]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2500 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:12.724000 audit[2500]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffff6beec90 a2=0 a3=0 items=0 ppid=2357 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.724000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 13 23:42:12.738334 (udev-worker)[2379]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 23:42:12.752000 audit[2505]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2505 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:12.752000 audit[2505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffd5827780 a2=0 a3=0 items=0 ppid=2357 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.752000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 13 23:42:12.761000 audit[2507]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2507 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:12.761000 audit[2507]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=fffff3eb4060 a2=0 a3=0 items=0 ppid=2357 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.761000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 13 23:42:12.781000 audit[2515]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2515 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:12.781000 audit[2515]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffd9a160f0 a2=0 a3=0 items=0 ppid=2357 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.781000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 13 23:42:12.801000 audit[2521]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:12.801000 audit[2521]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd7aab840 a2=0 a3=0 items=0 ppid=2357 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.801000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 13 23:42:12.806000 audit[2523]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:12.806000 audit[2523]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffcbccb0c0 a2=0 a3=0 items=0 ppid=2357 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.806000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 13 23:42:12.810000 audit[2525]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2525 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:12.810000 audit[2525]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc56cffe0 a2=0 a3=0 items=0 ppid=2357 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.810000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 13 23:42:12.814000 audit[2527]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2527 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:12.814000 audit[2527]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffd7950b50 a2=0 a3=0 items=0 ppid=2357 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.814000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 13 23:42:12.819000 audit[2529]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:12.819000 audit[2529]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff339c6e0 a2=0 a3=0 items=0 ppid=2357 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.819000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 13 23:42:12.822245 systemd-networkd[1570]: docker0: Link UP Jan 13 23:42:12.828463 dockerd[2357]: time="2026-01-13T23:42:12.828206755Z" level=info msg="Loading containers: done." Jan 13 23:42:12.855464 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck12320625-merged.mount: Deactivated successfully. 
Jan 13 23:42:12.866189 dockerd[2357]: time="2026-01-13T23:42:12.866091848Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 23:42:12.866388 dockerd[2357]: time="2026-01-13T23:42:12.866230808Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 13 23:42:12.866551 dockerd[2357]: time="2026-01-13T23:42:12.866517500Z" level=info msg="Initializing buildkit" Jan 13 23:42:12.905772 dockerd[2357]: time="2026-01-13T23:42:12.905705324Z" level=info msg="Completed buildkit initialization" Jan 13 23:42:12.919709 dockerd[2357]: time="2026-01-13T23:42:12.919606412Z" level=info msg="Daemon has completed initialization" Jan 13 23:42:12.920291 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 23:42:12.920552 dockerd[2357]: time="2026-01-13T23:42:12.920048588Z" level=info msg="API listen on /run/docker.sock" Jan 13 23:42:12.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:14.966738 containerd[1990]: time="2026-01-13T23:42:14.966631222Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 13 23:42:15.134112 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 23:42:15.137041 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:42:15.542008 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:42:15.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:15.544699 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 13 23:42:15.544809 kernel: audit: type=1130 audit(1768347735.543:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:15.567721 (kubelet)[2575]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 23:42:15.667832 kubelet[2575]: E0113 23:42:15.667746 2575 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 23:42:15.676038 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 23:42:15.676868 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 23:42:15.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 13 23:42:15.678449 systemd[1]: kubelet.service: Consumed 344ms CPU time, 106M memory peak. 
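The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; kubeadm normally writes that file during init/join, so systemd keeps scheduling restarts until it appears (the counter reaches 2 further down). A minimal sketch of that wait, using a hypothetical helper name and assuming it runs on the node itself:

```python
# Hypothetical helper: poll for the kubelet config that kubeadm writes.
# Until the file exists, kubelet.service exits with status 1/FAILURE and
# systemd schedules another restart, as seen in the log above.
import pathlib
import time

CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")  # path taken from the log


def wait_for_kubelet_config(timeout_s: float = 300.0, interval_s: float = 5.0) -> bool:
    """Return True as soon as the config file appears, False on timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if CONFIG.exists():
            return True
        time.sleep(interval_s)
    return False


if __name__ == "__main__":
    print("kubelet config present:", wait_for_kubelet_config(timeout_s=10))
```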
Jan 13 23:42:15.685213 kernel: audit: type=1131 audit(1768347735.678:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 13 23:42:15.906407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2665386708.mount: Deactivated successfully. Jan 13 23:42:17.220167 containerd[1990]: time="2026-01-13T23:42:17.219476097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:42:17.222464 containerd[1990]: time="2026-01-13T23:42:17.222394029Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=24845792" Jan 13 23:42:17.225162 containerd[1990]: time="2026-01-13T23:42:17.225091701Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:42:17.230483 containerd[1990]: time="2026-01-13T23:42:17.230432997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:42:17.232359 containerd[1990]: time="2026-01-13T23:42:17.232293477Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.264757083s" Jan 13 23:42:17.232481 containerd[1990]: time="2026-01-13T23:42:17.232359717Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 13 23:42:17.234306 containerd[1990]: time="2026-01-13T23:42:17.234259701Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 13 23:42:18.795342 containerd[1990]: time="2026-01-13T23:42:18.795263593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:42:18.797194 containerd[1990]: time="2026-01-13T23:42:18.797096113Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22613932" Jan 13 23:42:18.800092 containerd[1990]: time="2026-01-13T23:42:18.799060657Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:42:18.804610 containerd[1990]: time="2026-01-13T23:42:18.804534145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:42:18.806999 containerd[1990]: time="2026-01-13T23:42:18.806914729Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.572440384s" Jan 13 23:42:18.807213 containerd[1990]: time="2026-01-13T23:42:18.807182137Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 13 23:42:18.808742 containerd[1990]: time="2026-01-13T23:42:18.808700245Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 13 23:42:20.090673 containerd[1990]: time="2026-01-13T23:42:20.090489491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:42:20.093101 containerd[1990]: time="2026-01-13T23:42:20.093057191Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17608611" Jan 13 23:42:20.094437 containerd[1990]: time="2026-01-13T23:42:20.094391951Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:42:20.099347 containerd[1990]: time="2026-01-13T23:42:20.099294323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:42:20.101444 containerd[1990]: time="2026-01-13T23:42:20.101392403Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.292443602s" Jan 13 23:42:20.101644 containerd[1990]: time="2026-01-13T23:42:20.101610695Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 13 23:42:20.102782 containerd[1990]: time="2026-01-13T23:42:20.102727055Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 13 23:42:21.383380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1218639230.mount: Deactivated successfully. 
Jan 13 23:42:21.956952 containerd[1990]: time="2026-01-13T23:42:21.956892569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:42:21.957819 containerd[1990]: time="2026-01-13T23:42:21.957763181Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=0" Jan 13 23:42:21.959172 containerd[1990]: time="2026-01-13T23:42:21.959093465Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:42:21.962535 containerd[1990]: time="2026-01-13T23:42:21.962464157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:42:21.963813 containerd[1990]: time="2026-01-13T23:42:21.963754301Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.86096595s" Jan 13 23:42:21.963920 containerd[1990]: time="2026-01-13T23:42:21.963810137Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 13 23:42:21.964509 containerd[1990]: time="2026-01-13T23:42:21.964440977Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 13 23:42:22.791350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2339251619.mount: Deactivated successfully. 
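containerd logs both a duration and a size for each completed pull, so a rough effective pull rate can be read off directly; note the size it reports is the image size, not necessarily the bytes transferred, so treat the figure as an estimate. A quick sketch using the kube-proxy numbers above:

```python
# Rough effective pull rate for kube-proxy:v1.32.11, using the reported
# image size (27557743 bytes) and pull duration (1.86096595 s) from the
# containerd log above.
size_bytes = 27_557_743
duration_s = 1.86096595
print(f"{size_bytes / duration_s / 2**20:.1f} MiB/s")  # ~14.1 MiB/s
```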
Jan 13 23:42:24.053183 containerd[1990]: time="2026-01-13T23:42:24.052839447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:42:24.056850 containerd[1990]: time="2026-01-13T23:42:24.056752599Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=15956379" Jan 13 23:42:24.059178 containerd[1990]: time="2026-01-13T23:42:24.059074359Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:42:24.069164 containerd[1990]: time="2026-01-13T23:42:24.068076687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:42:24.070474 containerd[1990]: time="2026-01-13T23:42:24.070385439Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.105882134s" Jan 13 23:42:24.070707 containerd[1990]: time="2026-01-13T23:42:24.070665075Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 13 23:42:24.071716 containerd[1990]: time="2026-01-13T23:42:24.071639427Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 13 23:42:24.563950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1475710531.mount: Deactivated successfully. 
Jan 13 23:42:24.579611 containerd[1990]: time="2026-01-13T23:42:24.579556050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 23:42:24.582470 containerd[1990]: time="2026-01-13T23:42:24.582366702Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 13 23:42:24.585494 containerd[1990]: time="2026-01-13T23:42:24.585432414Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 23:42:24.591793 containerd[1990]: time="2026-01-13T23:42:24.591709314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 23:42:24.593731 containerd[1990]: time="2026-01-13T23:42:24.593489190Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 521.782635ms" Jan 13 23:42:24.593731 containerd[1990]: time="2026-01-13T23:42:24.593546874Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 13 23:42:24.594660 containerd[1990]: time="2026-01-13T23:42:24.594159270Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 13 23:42:25.190342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2110513158.mount: Deactivated successfully. Jan 13 23:42:25.884391 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 23:42:25.887631 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:42:26.383493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:42:26.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:26.391156 kernel: audit: type=1130 audit(1768347746.383:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:26.398231 (kubelet)[2772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 23:42:26.483207 kubelet[2772]: E0113 23:42:26.483099 2772 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 23:42:26.488120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 23:42:26.488577 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 23:42:26.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 13 23:42:26.489752 systemd[1]: kubelet.service: Consumed 324ms CPU time, 106.8M memory peak. Jan 13 23:42:26.495220 kernel: audit: type=1131 audit(1768347746.489:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 13 23:42:27.686645 containerd[1990]: time="2026-01-13T23:42:27.686564625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:42:27.689858 containerd[1990]: time="2026-01-13T23:42:27.689765733Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=56456774" Jan 13 23:42:27.690917 containerd[1990]: time="2026-01-13T23:42:27.690855417Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:42:27.697414 containerd[1990]: time="2026-01-13T23:42:27.696597489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:42:27.699613 containerd[1990]: time="2026-01-13T23:42:27.699369657Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.105155859s" Jan 13 23:42:27.699613 containerd[1990]: time="2026-01-13T23:42:27.699439185Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 13 23:42:29.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:29.559645 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 13 23:42:29.566169 kernel: audit: type=1131 audit(1768347749.560:300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:29.575000 audit: BPF prog-id=66 op=UNLOAD Jan 13 23:42:29.578195 kernel: audit: type=1334 audit(1768347749.575:301): prog-id=66 op=UNLOAD Jan 13 23:42:35.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:35.382283 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:42:35.382714 systemd[1]: kubelet.service: Consumed 324ms CPU time, 106.8M memory peak. Jan 13 23:42:35.387312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 13 23:42:35.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:35.394302 kernel: audit: type=1130 audit(1768347755.382:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:35.394428 kernel: audit: type=1131 audit(1768347755.382:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:35.449884 systemd[1]: Reload requested from client PID 2811 ('systemctl') (unit session-8.scope)... Jan 13 23:42:35.450295 systemd[1]: Reloading... Jan 13 23:42:35.667178 zram_generator::config[2867]: No configuration found. Jan 13 23:42:36.141856 systemd[1]: Reloading finished in 690 ms. Jan 13 23:42:36.201437 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 23:42:36.202547 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 23:42:36.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 13 23:42:36.204283 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:42:36.212200 kernel: audit: type=1130 audit(1768347756.204:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 13 23:42:36.212378 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 13 23:42:36.215000 audit: BPF prog-id=70 op=LOAD Jan 13 23:42:36.217000 audit: BPF prog-id=52 op=UNLOAD Jan 13 23:42:36.220867 kernel: audit: type=1334 audit(1768347756.215:305): prog-id=70 op=LOAD Jan 13 23:42:36.220986 kernel: audit: type=1334 audit(1768347756.217:306): prog-id=52 op=UNLOAD Jan 13 23:42:36.217000 audit: BPF prog-id=71 op=LOAD Jan 13 23:42:36.224772 kernel: audit: type=1334 audit(1768347756.217:307): prog-id=71 op=LOAD Jan 13 23:42:36.217000 audit: BPF prog-id=72 op=LOAD Jan 13 23:42:36.227156 kernel: audit: type=1334 audit(1768347756.217:308): prog-id=72 op=LOAD Jan 13 23:42:36.217000 audit: BPF prog-id=53 op=UNLOAD Jan 13 23:42:36.228783 kernel: audit: type=1334 audit(1768347756.217:309): prog-id=53 op=UNLOAD Jan 13 23:42:36.228838 kernel: audit: type=1334 audit(1768347756.217:310): prog-id=54 op=UNLOAD Jan 13 23:42:36.217000 audit: BPF prog-id=54 op=UNLOAD Jan 13 23:42:36.231165 kernel: audit: type=1334 audit(1768347756.222:311): prog-id=73 op=LOAD Jan 13 23:42:36.222000 audit: BPF prog-id=73 op=LOAD Jan 13 23:42:36.222000 audit: BPF prog-id=63 op=UNLOAD Jan 13 23:42:36.222000 audit: BPF prog-id=74 op=LOAD Jan 13 23:42:36.222000 audit: BPF prog-id=75 op=LOAD Jan 13 23:42:36.222000 audit: BPF prog-id=64 op=UNLOAD Jan 13 23:42:36.222000 audit: BPF prog-id=65 op=UNLOAD Jan 13 23:42:36.233000 audit: BPF prog-id=76 op=LOAD Jan 13 23:42:36.234000 audit: BPF prog-id=51 op=UNLOAD Jan 13 23:42:36.235000 audit: BPF prog-id=77 op=LOAD Jan 13 23:42:36.235000 audit: BPF prog-id=55 op=UNLOAD Jan 13 23:42:36.235000 audit: BPF prog-id=78 op=LOAD Jan 13 23:42:36.235000 audit: BPF prog-id=79 op=LOAD Jan 13 23:42:36.235000 audit: BPF prog-id=56 op=UNLOAD Jan 13 23:42:36.235000 audit: BPF prog-id=57 op=UNLOAD Jan 13 23:42:36.236000 audit: BPF prog-id=80 op=LOAD Jan 13 23:42:36.236000 audit: BPF prog-id=60 op=UNLOAD Jan 13 23:42:36.237000 audit: BPF prog-id=81 op=LOAD Jan 13 23:42:36.237000 audit: BPF prog-id=82 op=LOAD Jan 13 23:42:36.237000 audit: BPF prog-id=61 op=UNLOAD Jan 13 23:42:36.237000 audit: BPF prog-id=62 op=UNLOAD Jan 13 23:42:36.238000 audit: BPF prog-id=83 op=LOAD Jan 13 23:42:36.238000 audit: BPF prog-id=84 op=LOAD Jan 13 23:42:36.238000 audit: BPF prog-id=58 op=UNLOAD Jan 13 23:42:36.238000 audit: BPF prog-id=59 op=UNLOAD Jan 13 23:42:36.239000 audit: BPF prog-id=85 op=LOAD Jan 13 23:42:36.239000 audit: BPF prog-id=69 op=UNLOAD Jan 13 23:42:36.240000 audit: BPF prog-id=86 op=LOAD Jan 13 23:42:36.240000 audit: BPF prog-id=48 op=UNLOAD Jan 13 23:42:36.240000 audit: BPF prog-id=87 op=LOAD Jan 13 23:42:36.240000 audit: BPF prog-id=88 op=LOAD Jan 13 23:42:36.240000 audit: BPF prog-id=49 op=UNLOAD Jan 13 23:42:36.240000 audit: BPF prog-id=50 op=UNLOAD Jan 13 23:42:36.246000 audit: BPF prog-id=89 op=LOAD Jan 13 23:42:36.246000 audit: BPF prog-id=47 op=UNLOAD Jan 13 23:42:37.056640 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:42:37.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:37.074672 (kubelet)[2918]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 23:42:37.151514 kubelet[2918]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 23:42:37.153159 kubelet[2918]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 13 23:42:37.153159 kubelet[2918]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 23:42:37.153159 kubelet[2918]: I0113 23:42:37.152183 2918 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 23:42:37.883377 kubelet[2918]: I0113 23:42:37.883324 2918 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 13 23:42:37.883551 kubelet[2918]: I0113 23:42:37.883533 2918 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 23:42:37.884111 kubelet[2918]: I0113 23:42:37.884089 2918 server.go:954] "Client rotation is on, will bootstrap in background" Jan 13 23:42:37.936442 kubelet[2918]: E0113 23:42:37.936373 2918 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.22.81:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.22.81:6443: connect: connection refused" logger="UnhandledError" Jan 13 23:42:37.939739 kubelet[2918]: I0113 23:42:37.939637 2918 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 23:42:37.951863 kubelet[2918]: I0113 23:42:37.951821 2918 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 13 23:42:37.960729 kubelet[2918]: I0113 23:42:37.960680 2918 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 23:42:37.961478 kubelet[2918]: I0113 23:42:37.961407 2918 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 23:42:37.962121 kubelet[2918]: I0113 23:42:37.961632 2918 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-81","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 23:42:37.962624 kubelet[2918]: I0113 23:42:37.962592 2918 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 23:42:37.962731 kubelet[2918]: I0113 23:42:37.962714 2918 container_manager_linux.go:304] "Creating device plugin manager" Jan 13 23:42:37.963215 kubelet[2918]: I0113 23:42:37.963183 2918 state_mem.go:36] "Initialized new in-memory state store" Jan 13 23:42:37.971201 kubelet[2918]: I0113 23:42:37.971110 2918 kubelet.go:446] "Attempting to sync node with API server" Jan 13 23:42:37.971201 kubelet[2918]: I0113 23:42:37.971205 2918 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 23:42:37.971405 kubelet[2918]: I0113 23:42:37.971255 2918 kubelet.go:352] "Adding apiserver pod source" Jan 13 23:42:37.971405 kubelet[2918]: I0113 23:42:37.971276 2918 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 23:42:37.978110 kubelet[2918]: I0113 23:42:37.978051 2918 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 13 23:42:37.978110 kubelet[2918]: I0113 23:42:37.979118 2918 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 23:42:37.978110 kubelet[2918]: W0113 23:42:37.979389 2918 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
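The nodeConfig value logged above is one large JSON object; pulling out just the hard-eviction thresholds makes the defaults easier to read. A sketch that reproduces only the eviction-related fields from the log (the other fields are omitted here) and prints them in tabular form:

```python
import json

# Excerpt of the nodeConfig logged by container_manager_linux.go above;
# only the eviction-related fields are reproduced.
node_config = json.loads("""
{"NodeName":"ip-172-31-22-81","CgroupDriver":"systemd","CgroupRoot":"/",
 "HardEvictionThresholds":[
   {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
   {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
   {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
   {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
   {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}}]}
""")

for t in node_config["HardEvictionThresholds"]:
    v = t["Value"]
    limit = v["Quantity"] if v["Quantity"] is not None else f"{v['Percentage']:.0%}"
    print(f"{t['Signal']:<20} {t['Operator']} {limit}")
# memory.available     LessThan 100Mi
# nodefs.available     LessThan 10%  ... and so on for the remaining signals
```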
Jan 13 23:42:37.983565 kubelet[2918]: I0113 23:42:37.983497 2918 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 13 23:42:37.983565 kubelet[2918]: I0113 23:42:37.983564 2918 server.go:1287] "Started kubelet" Jan 13 23:42:37.983884 kubelet[2918]: W0113 23:42:37.983804 2918 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.22.81:6443: connect: connection refused Jan 13 23:42:37.983944 kubelet[2918]: E0113 23:42:37.983917 2918 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.22.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.81:6443: connect: connection refused" logger="UnhandledError" Jan 13 23:42:37.984109 kubelet[2918]: W0113 23:42:37.984055 2918 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-81&limit=500&resourceVersion=0": dial tcp 172.31.22.81:6443: connect: connection refused Jan 13 23:42:37.984218 kubelet[2918]: E0113 23:42:37.984122 2918 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.22.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-81&limit=500&resourceVersion=0\": dial tcp 172.31.22.81:6443: connect: connection refused" logger="UnhandledError" Jan 13 23:42:37.992603 kubelet[2918]: E0113 23:42:37.992106 2918 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.81:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.81:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-81.188a6eeab1f78a60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-81,UID:ip-172-31-22-81,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-81,},FirstTimestamp:2026-01-13 23:42:37.983533664 +0000 UTC m=+0.901543781,LastTimestamp:2026-01-13 23:42:37.983533664 +0000 UTC m=+0.901543781,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-81,}" Jan 13 23:42:37.995119 kubelet[2918]: I0113 23:42:37.995059 2918 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 23:42:37.997213 kubelet[2918]: I0113 23:42:37.997111 2918 server.go:479] "Adding debug handlers to kubelet server" Jan 13 23:42:37.997449 kubelet[2918]: I0113 23:42:37.997400 2918 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 23:42:37.999270 kubelet[2918]: I0113 23:42:37.999189 2918 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 23:42:37.999749 kubelet[2918]: I0113 23:42:37.999716 2918 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 23:42:38.001445 kubelet[2918]: I0113 23:42:38.001398 2918 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 
23:42:38.003000 audit[2930]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2930 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:38.003000 audit[2930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc9bcdbc0 a2=0 a3=0 items=0 ppid=2918 pid=2930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:38.003000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 13 23:42:38.007000 audit[2931]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:38.007000 audit[2931]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd35f76b0 a2=0 a3=0 items=0 ppid=2918 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:38.007000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 13 23:42:38.008556 kubelet[2918]: I0113 23:42:38.007678 2918 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 13 23:42:38.008556 kubelet[2918]: E0113 23:42:38.008014 2918 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-22-81\" not found" Jan 13 23:42:38.008697 kubelet[2918]: I0113 23:42:38.008667 2918 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 13 23:42:38.010188 kubelet[2918]: E0113 23:42:38.009878 2918 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-81?timeout=10s\": dial tcp 172.31.22.81:6443: connect: connection refused" interval="200ms" Jan 13 23:42:38.010318 kubelet[2918]: I0113 23:42:38.009533 2918 reconciler.go:26] "Reconciler: start to sync state" Jan 13 23:42:38.011382 kubelet[2918]: I0113 23:42:38.011312 2918 factory.go:221] Registration of the systemd container factory successfully Jan 13 23:42:38.012160 kubelet[2918]: I0113 23:42:38.011518 2918 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 23:42:38.012676 kubelet[2918]: E0113 23:42:38.012637 2918 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 23:42:38.013544 kubelet[2918]: W0113 23:42:38.013422 2918 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.81:6443: connect: connection refused Jan 13 23:42:38.014183 kubelet[2918]: E0113 23:42:38.013580 2918 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.22.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.81:6443: connect: connection refused" logger="UnhandledError" Jan 13 23:42:38.018472 kubelet[2918]: I0113 23:42:38.018382 2918 factory.go:221] Registration of the containerd container factory successfully Jan 13 23:42:38.019000 audit[2933]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:38.019000 audit[2933]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd7730910 a2=0 a3=0 items=0 ppid=2918 pid=2933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:38.019000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 13 23:42:38.036000 audit[2936]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2936 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:38.036000 audit[2936]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffca4f1ed0 a2=0 a3=0 items=0 ppid=2918 pid=2936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:38.036000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 13 23:42:38.046175 kubelet[2918]: I0113 23:42:38.045717 2918 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 13 23:42:38.046175 kubelet[2918]: I0113 23:42:38.045745 2918 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 13 23:42:38.046175 kubelet[2918]: I0113 23:42:38.045777 2918 state_mem.go:36] "Initialized new in-memory state store" Jan 13 23:42:38.048764 kubelet[2918]: I0113 23:42:38.048730 2918 policy_none.go:49] "None policy: Start" Jan 13 23:42:38.049038 kubelet[2918]: I0113 23:42:38.049015 2918 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 13 23:42:38.049203 kubelet[2918]: I0113 23:42:38.049180 2918 state_mem.go:35] "Initializing new in-memory state store" Jan 13 23:42:38.059000 audit[2940]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:38.059000 audit[2940]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffd6944560 a2=0 a3=0 items=0 ppid=2918 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:38.059000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 13 23:42:38.061111 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 23:42:38.063264 kubelet[2918]: I0113 23:42:38.063220 2918 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 23:42:38.066000 audit[2942]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2942 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:38.066000 audit[2942]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff7811ea0 a2=0 a3=0 items=0 ppid=2918 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:38.066000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 13 23:42:38.068164 kubelet[2918]: I0113 23:42:38.067717 2918 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 23:42:38.068164 kubelet[2918]: I0113 23:42:38.067770 2918 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 13 23:42:38.068164 kubelet[2918]: I0113 23:42:38.067805 2918 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 13 23:42:38.068164 kubelet[2918]: I0113 23:42:38.067821 2918 kubelet.go:2382] "Starting kubelet main sync loop" Jan 13 23:42:38.068164 kubelet[2918]: E0113 23:42:38.067902 2918 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 23:42:38.068000 audit[2943]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:38.068000 audit[2943]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffee9e0350 a2=0 a3=0 items=0 ppid=2918 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:38.068000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 13 23:42:38.071551 kubelet[2918]: W0113 23:42:38.071463 2918 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.81:6443: connect: connection refused Jan 13 23:42:38.072769 kubelet[2918]: E0113 23:42:38.072717 2918 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.22.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.81:6443: connect: connection refused" logger="UnhandledError" Jan 13 23:42:38.074000 audit[2947]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2947 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 
23:42:38.074000 audit[2947]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc9b1c1b0 a2=0 a3=0 items=0 ppid=2918 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:38.074000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 13 23:42:38.077000 audit[2948]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2948 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:38.077000 audit[2948]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffedef0500 a2=0 a3=0 items=0 ppid=2918 pid=2948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:38.077000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 13 23:42:38.081000 audit[2949]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2949 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:38.081000 audit[2949]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc1785ca0 a2=0 a3=0 items=0 ppid=2918 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:38.081000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 13 23:42:38.085000 audit[2950]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2950 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:38.085000 audit[2950]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd40dc0f0 a2=0 a3=0 items=0 ppid=2918 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:38.085000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 13 23:42:38.088000 audit[2951]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2951 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:38.088000 audit[2951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe1568730 a2=0 a3=0 items=0 ppid=2918 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:38.088000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 13 23:42:38.092285 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 23:42:38.104072 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
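The kubepods slices created here, and the per-pod kubepods-burstable-pod<uid>.slice units that follow, reflect the kubelet's QoS cgroup layout under the systemd driver. A small illustrative mapping (the helper name is mine, not kubelet code):

```python
# Illustrative mapping from pod QoS class to the parent slice hierarchy
# the kubelet uses with cgroupDriver=systemd; Burstable and BestEffort
# pods nest under the QoS slices created in the log above, Guaranteed
# pods get a kubepods-pod<uid>.slice directly under kubepods.slice.
def qos_parent_slice(qos_class: str) -> str:
    return {
        "Guaranteed": "kubepods.slice",
        "Burstable": "kubepods.slice/kubepods-burstable.slice",
        "BestEffort": "kubepods.slice/kubepods-besteffort.slice",
    }[qos_class]


# The static control-plane pods below are Burstable, hence the
# kubepods-burstable-pod<uid>.slice unit names in the log.
print(qos_parent_slice("Burstable"))
```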
Jan 13 23:42:38.108179 kubelet[2918]: E0113 23:42:38.108097 2918 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-22-81\" not found" Jan 13 23:42:38.113969 kubelet[2918]: I0113 23:42:38.113925 2918 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 23:42:38.114622 kubelet[2918]: I0113 23:42:38.114540 2918 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 23:42:38.114622 kubelet[2918]: I0113 23:42:38.114576 2918 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 23:42:38.115795 kubelet[2918]: I0113 23:42:38.114931 2918 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 23:42:38.118777 kubelet[2918]: E0113 23:42:38.118741 2918 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 13 23:42:38.118951 kubelet[2918]: E0113 23:42:38.118929 2918 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-22-81\" not found" Jan 13 23:42:38.192570 systemd[1]: Created slice kubepods-burstable-pod19661e3783ae6a5498e7ab32f695904d.slice - libcontainer container kubepods-burstable-pod19661e3783ae6a5498e7ab32f695904d.slice. Jan 13 23:42:38.211214 kubelet[2918]: E0113 23:42:38.209942 2918 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-81\" not found" node="ip-172-31-22-81" Jan 13 23:42:38.211991 kubelet[2918]: E0113 23:42:38.210642 2918 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-81?timeout=10s\": dial tcp 172.31.22.81:6443: connect: connection refused" interval="400ms" Jan 13 23:42:38.211991 kubelet[2918]: I0113 23:42:38.210901 2918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f36ea20b10c19ade6e0df918b046067-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-81\" (UID: \"4f36ea20b10c19ade6e0df918b046067\") " pod="kube-system/kube-scheduler-ip-172-31-22-81" Jan 13 23:42:38.211991 kubelet[2918]: I0113 23:42:38.211894 2918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19661e3783ae6a5498e7ab32f695904d-ca-certs\") pod \"kube-apiserver-ip-172-31-22-81\" (UID: \"19661e3783ae6a5498e7ab32f695904d\") " pod="kube-system/kube-apiserver-ip-172-31-22-81" Jan 13 23:42:38.211991 kubelet[2918]: I0113 23:42:38.211948 2918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19661e3783ae6a5498e7ab32f695904d-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-81\" (UID: \"19661e3783ae6a5498e7ab32f695904d\") " pod="kube-system/kube-apiserver-ip-172-31-22-81" Jan 13 23:42:38.212358 kubelet[2918]: I0113 23:42:38.211995 2918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19661e3783ae6a5498e7ab32f695904d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-81\" (UID: \"19661e3783ae6a5498e7ab32f695904d\") " 
pod="kube-system/kube-apiserver-ip-172-31-22-81" Jan 13 23:42:38.212358 kubelet[2918]: I0113 23:42:38.212035 2918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/77dae837c76ee31eccf813da0ad8d8f1-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-81\" (UID: \"77dae837c76ee31eccf813da0ad8d8f1\") " pod="kube-system/kube-controller-manager-ip-172-31-22-81" Jan 13 23:42:38.212358 kubelet[2918]: I0113 23:42:38.212070 2918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77dae837c76ee31eccf813da0ad8d8f1-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-81\" (UID: \"77dae837c76ee31eccf813da0ad8d8f1\") " pod="kube-system/kube-controller-manager-ip-172-31-22-81" Jan 13 23:42:38.212358 kubelet[2918]: I0113 23:42:38.212107 2918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77dae837c76ee31eccf813da0ad8d8f1-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-81\" (UID: \"77dae837c76ee31eccf813da0ad8d8f1\") " pod="kube-system/kube-controller-manager-ip-172-31-22-81" Jan 13 23:42:38.213338 kubelet[2918]: I0113 23:42:38.213242 2918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/77dae837c76ee31eccf813da0ad8d8f1-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-81\" (UID: \"77dae837c76ee31eccf813da0ad8d8f1\") " pod="kube-system/kube-controller-manager-ip-172-31-22-81" Jan 13 23:42:38.213435 kubelet[2918]: I0113 23:42:38.213353 2918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77dae837c76ee31eccf813da0ad8d8f1-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-81\" (UID: \"77dae837c76ee31eccf813da0ad8d8f1\") " pod="kube-system/kube-controller-manager-ip-172-31-22-81" Jan 13 23:42:38.216693 systemd[1]: Created slice kubepods-burstable-pod77dae837c76ee31eccf813da0ad8d8f1.slice - libcontainer container kubepods-burstable-pod77dae837c76ee31eccf813da0ad8d8f1.slice. Jan 13 23:42:38.224182 kubelet[2918]: E0113 23:42:38.223006 2918 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-81\" not found" node="ip-172-31-22-81" Jan 13 23:42:38.226043 kubelet[2918]: I0113 23:42:38.226010 2918 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-81" Jan 13 23:42:38.226987 kubelet[2918]: E0113 23:42:38.226945 2918 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.81:6443/api/v1/nodes\": dial tcp 172.31.22.81:6443: connect: connection refused" node="ip-172-31-22-81" Jan 13 23:42:38.229526 systemd[1]: Created slice kubepods-burstable-pod4f36ea20b10c19ade6e0df918b046067.slice - libcontainer container kubepods-burstable-pod4f36ea20b10c19ade6e0df918b046067.slice. 
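Every "connection refused" above and below is the kubelet's client failing the same TCP connect to 172.31.22.81:6443; the errors persist until the kube-apiserver static pod whose sandbox is created next starts listening. A minimal sketch of that reachability check, with host and port taken from the log and the helper name invented here:

```python
import socket

# Probe the API server endpoint the kubelet keeps retrying
# (172.31.22.81:6443 in the log); returns True once something listens.
def apiserver_listening(host: str = "172.31.22.81", port: int = 6443,
                        timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    print("kube-apiserver reachable:", apiserver_listening())
```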
Jan 13 23:42:38.233766 kubelet[2918]: E0113 23:42:38.233727 2918 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-81\" not found" node="ip-172-31-22-81" Jan 13 23:42:38.430895 kubelet[2918]: I0113 23:42:38.430825 2918 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-81" Jan 13 23:42:38.431518 kubelet[2918]: E0113 23:42:38.431478 2918 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.81:6443/api/v1/nodes\": dial tcp 172.31.22.81:6443: connect: connection refused" node="ip-172-31-22-81" Jan 13 23:42:38.516828 containerd[1990]: time="2026-01-13T23:42:38.515989243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-81,Uid:19661e3783ae6a5498e7ab32f695904d,Namespace:kube-system,Attempt:0,}" Jan 13 23:42:38.525531 containerd[1990]: time="2026-01-13T23:42:38.525288751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-81,Uid:77dae837c76ee31eccf813da0ad8d8f1,Namespace:kube-system,Attempt:0,}" Jan 13 23:42:38.536104 containerd[1990]: time="2026-01-13T23:42:38.536054335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-81,Uid:4f36ea20b10c19ade6e0df918b046067,Namespace:kube-system,Attempt:0,}" Jan 13 23:42:38.612646 kubelet[2918]: E0113 23:42:38.612558 2918 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-81?timeout=10s\": dial tcp 172.31.22.81:6443: connect: connection refused" interval="800ms" Jan 13 23:42:38.811520 containerd[1990]: time="2026-01-13T23:42:38.811099544Z" level=info msg="connecting to shim db74787b4434ffb6bbebab1f84eb7bf6091c640e9e042a64049aa07c99b9c093" address="unix:///run/containerd/s/7885cd1bf859e7b9d9e8d1d9c8877d188fdbbcde5bd0fb5e75e2c665705de612" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:42:38.834378 kubelet[2918]: I0113 23:42:38.834317 2918 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-81" Jan 13 23:42:38.848293 kubelet[2918]: E0113 23:42:38.835024 2918 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.81:6443/api/v1/nodes\": dial tcp 172.31.22.81:6443: connect: connection refused" node="ip-172-31-22-81" Jan 13 23:42:38.850185 kubelet[2918]: W0113 23:42:38.850056 2918 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.81:6443: connect: connection refused Jan 13 23:42:38.850456 kubelet[2918]: E0113 23:42:38.850393 2918 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.22.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.81:6443: connect: connection refused" logger="UnhandledError" Jan 13 23:42:38.929171 containerd[1990]: time="2026-01-13T23:42:38.928706877Z" level=info msg="connecting to shim 4a534ff9cb1fe5e121e4965c693ee674868d5352c0cd56707177c822b55aa93e" address="unix:///run/containerd/s/a3fdfed39eb1a27873e657998e2eeea3e5ac1ff8e0e2eee5cd17ddf0cf13b294" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:42:38.952028 containerd[1990]: time="2026-01-13T23:42:38.949969869Z" level=info 
msg="connecting to shim df94e9e5d1a5318d45488a016147ed4f7f9aa83845193a34354fa2e7d0b8943c" address="unix:///run/containerd/s/a30b9eae1753983dbd6670409d3ffef0ea865265fe0d6793784eb73617bcfb13" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:42:38.951889 systemd[1]: Started cri-containerd-db74787b4434ffb6bbebab1f84eb7bf6091c640e9e042a64049aa07c99b9c093.scope - libcontainer container db74787b4434ffb6bbebab1f84eb7bf6091c640e9e042a64049aa07c99b9c093. Jan 13 23:42:39.017737 systemd[1]: Started cri-containerd-df94e9e5d1a5318d45488a016147ed4f7f9aa83845193a34354fa2e7d0b8943c.scope - libcontainer container df94e9e5d1a5318d45488a016147ed4f7f9aa83845193a34354fa2e7d0b8943c. Jan 13 23:42:39.018107 kubelet[2918]: W0113 23:42:39.017710 2918 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-81&limit=500&resourceVersion=0": dial tcp 172.31.22.81:6443: connect: connection refused Jan 13 23:42:39.018107 kubelet[2918]: E0113 23:42:39.017837 2918 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.22.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-81&limit=500&resourceVersion=0\": dial tcp 172.31.22.81:6443: connect: connection refused" logger="UnhandledError" Jan 13 23:42:39.025000 audit: BPF prog-id=90 op=LOAD Jan 13 23:42:39.028000 audit: BPF prog-id=91 op=LOAD Jan 13 23:42:39.028000 audit[2971]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=2960 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.028000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462373437383762343433346666623662626562616231663834656237 Jan 13 23:42:39.029000 audit: BPF prog-id=91 op=UNLOAD Jan 13 23:42:39.029000 audit[2971]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2960 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462373437383762343433346666623662626562616231663834656237 Jan 13 23:42:39.031000 audit: BPF prog-id=92 op=LOAD Jan 13 23:42:39.031000 audit[2971]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=2960 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462373437383762343433346666623662626562616231663834656237 Jan 13 23:42:39.031000 audit: BPF prog-id=93 op=LOAD Jan 13 
23:42:39.031000 audit[2971]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=2960 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462373437383762343433346666623662626562616231663834656237 Jan 13 23:42:39.031000 audit: BPF prog-id=93 op=UNLOAD Jan 13 23:42:39.031000 audit[2971]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2960 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462373437383762343433346666623662626562616231663834656237 Jan 13 23:42:39.031000 audit: BPF prog-id=92 op=UNLOAD Jan 13 23:42:39.031000 audit[2971]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2960 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462373437383762343433346666623662626562616231663834656237 Jan 13 23:42:39.031000 audit: BPF prog-id=94 op=LOAD Jan 13 23:42:39.031000 audit[2971]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=2960 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462373437383762343433346666623662626562616231663834656237 Jan 13 23:42:39.045511 systemd[1]: Started cri-containerd-4a534ff9cb1fe5e121e4965c693ee674868d5352c0cd56707177c822b55aa93e.scope - libcontainer container 4a534ff9cb1fe5e121e4965c693ee674868d5352c0cd56707177c822b55aa93e. 
Jan 13 23:42:39.061000 audit: BPF prog-id=95 op=LOAD Jan 13 23:42:39.063000 audit: BPF prog-id=96 op=LOAD Jan 13 23:42:39.063000 audit[3017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3000 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466393465396535643161353331386434353438386130313631343765 Jan 13 23:42:39.064000 audit: BPF prog-id=96 op=UNLOAD Jan 13 23:42:39.064000 audit[3017]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3000 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466393465396535643161353331386434353438386130313631343765 Jan 13 23:42:39.064000 audit: BPF prog-id=97 op=LOAD Jan 13 23:42:39.064000 audit[3017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3000 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466393465396535643161353331386434353438386130313631343765 Jan 13 23:42:39.067000 audit: BPF prog-id=98 op=LOAD Jan 13 23:42:39.067000 audit[3017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3000 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466393465396535643161353331386434353438386130313631343765 Jan 13 23:42:39.068000 audit: BPF prog-id=98 op=UNLOAD Jan 13 23:42:39.068000 audit[3017]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3000 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466393465396535643161353331386434353438386130313631343765 Jan 13 23:42:39.068000 audit: BPF prog-id=97 op=UNLOAD Jan 13 23:42:39.068000 audit[3017]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3000 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466393465396535643161353331386434353438386130313631343765 Jan 13 23:42:39.068000 audit: BPF prog-id=99 op=LOAD Jan 13 23:42:39.068000 audit[3017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3000 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466393465396535643161353331386434353438386130313631343765 Jan 13 23:42:39.118000 audit: BPF prog-id=100 op=LOAD Jan 13 23:42:39.121000 audit: BPF prog-id=101 op=LOAD Jan 13 23:42:39.121000 audit[3034]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2993 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461353334666639636231666535653132316534393635633639336565 Jan 13 23:42:39.123000 audit: BPF prog-id=101 op=UNLOAD Jan 13 23:42:39.123000 audit[3034]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2993 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.123000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461353334666639636231666535653132316534393635633639336565 Jan 13 23:42:39.123000 audit: BPF prog-id=102 op=LOAD Jan 13 23:42:39.123000 audit[3034]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2993 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.123000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461353334666639636231666535653132316534393635633639336565 Jan 13 23:42:39.123000 audit: BPF prog-id=103 op=LOAD Jan 13 23:42:39.123000 audit[3034]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2993 
pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.123000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461353334666639636231666535653132316534393635633639336565 Jan 13 23:42:39.124000 audit: BPF prog-id=103 op=UNLOAD Jan 13 23:42:39.124000 audit[3034]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2993 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461353334666639636231666535653132316534393635633639336565 Jan 13 23:42:39.124000 audit: BPF prog-id=102 op=UNLOAD Jan 13 23:42:39.124000 audit[3034]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2993 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461353334666639636231666535653132316534393635633639336565 Jan 13 23:42:39.124000 audit: BPF prog-id=104 op=LOAD Jan 13 23:42:39.124000 audit[3034]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2993 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461353334666639636231666535653132316534393635633639336565 Jan 13 23:42:39.161922 containerd[1990]: time="2026-01-13T23:42:39.161811738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-81,Uid:19661e3783ae6a5498e7ab32f695904d,Namespace:kube-system,Attempt:0,} returns sandbox id \"db74787b4434ffb6bbebab1f84eb7bf6091c640e9e042a64049aa07c99b9c093\"" Jan 13 23:42:39.175492 containerd[1990]: time="2026-01-13T23:42:39.175436838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-81,Uid:4f36ea20b10c19ade6e0df918b046067,Namespace:kube-system,Attempt:0,} returns sandbox id \"df94e9e5d1a5318d45488a016147ed4f7f9aa83845193a34354fa2e7d0b8943c\"" Jan 13 23:42:39.178503 containerd[1990]: time="2026-01-13T23:42:39.178455306Z" level=info msg="CreateContainer within sandbox \"db74787b4434ffb6bbebab1f84eb7bf6091c640e9e042a64049aa07c99b9c093\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 23:42:39.193174 containerd[1990]: time="2026-01-13T23:42:39.192287094Z" level=info 
msg="CreateContainer within sandbox \"df94e9e5d1a5318d45488a016147ed4f7f9aa83845193a34354fa2e7d0b8943c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 23:42:39.200253 containerd[1990]: time="2026-01-13T23:42:39.200178870Z" level=info msg="Container 94c40d867522e86d37a848e4f66191324a38ed5bd16776e08f36e0f1984d303f: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:42:39.214986 containerd[1990]: time="2026-01-13T23:42:39.214913910Z" level=info msg="CreateContainer within sandbox \"db74787b4434ffb6bbebab1f84eb7bf6091c640e9e042a64049aa07c99b9c093\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"94c40d867522e86d37a848e4f66191324a38ed5bd16776e08f36e0f1984d303f\"" Jan 13 23:42:39.215684 containerd[1990]: time="2026-01-13T23:42:39.215614182Z" level=info msg="Container 3057bc6709d68a18b066fc2e3858f2a7488a6747542a8e0c5b33ee96d97c1913: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:42:39.217158 containerd[1990]: time="2026-01-13T23:42:39.216953718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-81,Uid:77dae837c76ee31eccf813da0ad8d8f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a534ff9cb1fe5e121e4965c693ee674868d5352c0cd56707177c822b55aa93e\"" Jan 13 23:42:39.217779 containerd[1990]: time="2026-01-13T23:42:39.217730334Z" level=info msg="StartContainer for \"94c40d867522e86d37a848e4f66191324a38ed5bd16776e08f36e0f1984d303f\"" Jan 13 23:42:39.222788 containerd[1990]: time="2026-01-13T23:42:39.222031050Z" level=info msg="connecting to shim 94c40d867522e86d37a848e4f66191324a38ed5bd16776e08f36e0f1984d303f" address="unix:///run/containerd/s/7885cd1bf859e7b9d9e8d1d9c8877d188fdbbcde5bd0fb5e75e2c665705de612" protocol=ttrpc version=3 Jan 13 23:42:39.224928 containerd[1990]: time="2026-01-13T23:42:39.224832462Z" level=info msg="CreateContainer within sandbox \"4a534ff9cb1fe5e121e4965c693ee674868d5352c0cd56707177c822b55aa93e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 23:42:39.233212 containerd[1990]: time="2026-01-13T23:42:39.233117094Z" level=info msg="CreateContainer within sandbox \"df94e9e5d1a5318d45488a016147ed4f7f9aa83845193a34354fa2e7d0b8943c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3057bc6709d68a18b066fc2e3858f2a7488a6747542a8e0c5b33ee96d97c1913\"" Jan 13 23:42:39.234336 containerd[1990]: time="2026-01-13T23:42:39.234274446Z" level=info msg="StartContainer for \"3057bc6709d68a18b066fc2e3858f2a7488a6747542a8e0c5b33ee96d97c1913\"" Jan 13 23:42:39.239163 containerd[1990]: time="2026-01-13T23:42:39.238209066Z" level=info msg="connecting to shim 3057bc6709d68a18b066fc2e3858f2a7488a6747542a8e0c5b33ee96d97c1913" address="unix:///run/containerd/s/a30b9eae1753983dbd6670409d3ffef0ea865265fe0d6793784eb73617bcfb13" protocol=ttrpc version=3 Jan 13 23:42:39.249590 containerd[1990]: time="2026-01-13T23:42:39.249506479Z" level=info msg="Container eca446fae923162551c4163c148dd68a37a60a1fab5a298db9905446e9dff934: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:42:39.268334 kubelet[2918]: W0113 23:42:39.268189 2918 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.81:6443: connect: connection refused Jan 13 23:42:39.268919 kubelet[2918]: E0113 23:42:39.268351 2918 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.22.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.81:6443: connect: connection refused" logger="UnhandledError" Jan 13 23:42:39.268991 containerd[1990]: time="2026-01-13T23:42:39.268716403Z" level=info msg="CreateContainer within sandbox \"4a534ff9cb1fe5e121e4965c693ee674868d5352c0cd56707177c822b55aa93e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"eca446fae923162551c4163c148dd68a37a60a1fab5a298db9905446e9dff934\"" Jan 13 23:42:39.269923 containerd[1990]: time="2026-01-13T23:42:39.269775883Z" level=info msg="StartContainer for \"eca446fae923162551c4163c148dd68a37a60a1fab5a298db9905446e9dff934\"" Jan 13 23:42:39.272503 systemd[1]: Started cri-containerd-94c40d867522e86d37a848e4f66191324a38ed5bd16776e08f36e0f1984d303f.scope - libcontainer container 94c40d867522e86d37a848e4f66191324a38ed5bd16776e08f36e0f1984d303f. Jan 13 23:42:39.276180 containerd[1990]: time="2026-01-13T23:42:39.275966563Z" level=info msg="connecting to shim eca446fae923162551c4163c148dd68a37a60a1fab5a298db9905446e9dff934" address="unix:///run/containerd/s/a3fdfed39eb1a27873e657998e2eeea3e5ac1ff8e0e2eee5cd17ddf0cf13b294" protocol=ttrpc version=3 Jan 13 23:42:39.295608 systemd[1]: Started cri-containerd-3057bc6709d68a18b066fc2e3858f2a7488a6747542a8e0c5b33ee96d97c1913.scope - libcontainer container 3057bc6709d68a18b066fc2e3858f2a7488a6747542a8e0c5b33ee96d97c1913. Jan 13 23:42:39.322348 kubelet[2918]: W0113 23:42:39.320610 2918 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.22.81:6443: connect: connection refused Jan 13 23:42:39.322738 kubelet[2918]: E0113 23:42:39.322641 2918 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.22.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.81:6443: connect: connection refused" logger="UnhandledError" Jan 13 23:42:39.339000 audit: BPF prog-id=105 op=LOAD Jan 13 23:42:39.348450 systemd[1]: Started cri-containerd-eca446fae923162551c4163c148dd68a37a60a1fab5a298db9905446e9dff934.scope - libcontainer container eca446fae923162551c4163c148dd68a37a60a1fab5a298db9905446e9dff934. 
Jan 13 23:42:39.348000 audit: BPF prog-id=106 op=LOAD Jan 13 23:42:39.348000 audit[3089]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2960 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.348000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934633430643836373532326538366433376138343865346636363139 Jan 13 23:42:39.348000 audit: BPF prog-id=106 op=UNLOAD Jan 13 23:42:39.348000 audit[3089]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2960 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.348000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934633430643836373532326538366433376138343865346636363139 Jan 13 23:42:39.349000 audit: BPF prog-id=107 op=LOAD Jan 13 23:42:39.349000 audit[3089]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2960 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934633430643836373532326538366433376138343865346636363139 Jan 13 23:42:39.349000 audit: BPF prog-id=108 op=LOAD Jan 13 23:42:39.349000 audit[3089]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2960 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934633430643836373532326538366433376138343865346636363139 Jan 13 23:42:39.349000 audit: BPF prog-id=108 op=UNLOAD Jan 13 23:42:39.349000 audit[3089]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2960 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934633430643836373532326538366433376138343865346636363139 Jan 13 23:42:39.349000 audit: BPF prog-id=107 op=UNLOAD Jan 13 23:42:39.349000 audit[3089]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 
a1=0 a2=0 a3=0 items=0 ppid=2960 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934633430643836373532326538366433376138343865346636363139 Jan 13 23:42:39.349000 audit: BPF prog-id=109 op=LOAD Jan 13 23:42:39.349000 audit[3089]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2960 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934633430643836373532326538366433376138343865346636363139 Jan 13 23:42:39.362000 audit: BPF prog-id=110 op=LOAD Jan 13 23:42:39.364000 audit: BPF prog-id=111 op=LOAD Jan 13 23:42:39.364000 audit[3097]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3000 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330353762633637303964363861313862303636666332653338353866 Jan 13 23:42:39.364000 audit: BPF prog-id=111 op=UNLOAD Jan 13 23:42:39.364000 audit[3097]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3000 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330353762633637303964363861313862303636666332653338353866 Jan 13 23:42:39.364000 audit: BPF prog-id=112 op=LOAD Jan 13 23:42:39.364000 audit[3097]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3000 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330353762633637303964363861313862303636666332653338353866 Jan 13 23:42:39.364000 audit: BPF prog-id=113 op=LOAD Jan 13 23:42:39.364000 audit[3097]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3000 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330353762633637303964363861313862303636666332653338353866 Jan 13 23:42:39.365000 audit: BPF prog-id=113 op=UNLOAD Jan 13 23:42:39.365000 audit[3097]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3000 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330353762633637303964363861313862303636666332653338353866 Jan 13 23:42:39.365000 audit: BPF prog-id=112 op=UNLOAD Jan 13 23:42:39.365000 audit[3097]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3000 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330353762633637303964363861313862303636666332653338353866 Jan 13 23:42:39.366000 audit: BPF prog-id=114 op=LOAD Jan 13 23:42:39.366000 audit[3097]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3000 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330353762633637303964363861313862303636666332653338353866 Jan 13 23:42:39.383699 kubelet[2918]: E0113 23:42:39.383523 2918 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.81:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.81:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-81.188a6eeab1f78a60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-81,UID:ip-172-31-22-81,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-81,},FirstTimestamp:2026-01-13 23:42:37.983533664 +0000 UTC m=+0.901543781,LastTimestamp:2026-01-13 23:42:37.983533664 +0000 UTC m=+0.901543781,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-81,}" Jan 13 23:42:39.390000 audit: BPF prog-id=115 op=LOAD Jan 13 23:42:39.393000 audit: BPF prog-id=116 op=LOAD Jan 13 23:42:39.393000 audit[3118]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2993 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.393000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563613434366661653932333136323535316334313633633134386464 Jan 13 23:42:39.393000 audit: BPF prog-id=116 op=UNLOAD Jan 13 23:42:39.393000 audit[3118]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2993 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.393000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563613434366661653932333136323535316334313633633134386464 Jan 13 23:42:39.394000 audit: BPF prog-id=117 op=LOAD Jan 13 23:42:39.394000 audit[3118]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2993 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563613434366661653932333136323535316334313633633134386464 Jan 13 23:42:39.394000 audit: BPF prog-id=118 op=LOAD Jan 13 23:42:39.394000 audit[3118]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2993 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563613434366661653932333136323535316334313633633134386464 Jan 13 23:42:39.395000 audit: BPF prog-id=118 op=UNLOAD Jan 13 23:42:39.395000 audit[3118]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2993 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563613434366661653932333136323535316334313633633134386464 Jan 13 23:42:39.395000 audit: BPF prog-id=117 op=UNLOAD Jan 13 23:42:39.395000 audit[3118]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2993 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563613434366661653932333136323535316334313633633134386464 Jan 13 23:42:39.395000 audit: BPF prog-id=119 op=LOAD Jan 13 23:42:39.395000 audit[3118]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2993 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:39.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563613434366661653932333136323535316334313633633134386464 Jan 13 23:42:39.415763 kubelet[2918]: E0113 23:42:39.415668 2918 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-81?timeout=10s\": dial tcp 172.31.22.81:6443: connect: connection refused" interval="1.6s" Jan 13 23:42:39.513046 containerd[1990]: time="2026-01-13T23:42:39.512866592Z" level=info msg="StartContainer for \"94c40d867522e86d37a848e4f66191324a38ed5bd16776e08f36e0f1984d303f\" returns successfully" Jan 13 23:42:39.515680 containerd[1990]: time="2026-01-13T23:42:39.515524868Z" level=info msg="StartContainer for \"3057bc6709d68a18b066fc2e3858f2a7488a6747542a8e0c5b33ee96d97c1913\" returns successfully" Jan 13 23:42:39.542512 containerd[1990]: time="2026-01-13T23:42:39.542451356Z" level=info msg="StartContainer for \"eca446fae923162551c4163c148dd68a37a60a1fab5a298db9905446e9dff934\" returns successfully" Jan 13 23:42:39.637396 kubelet[2918]: I0113 23:42:39.637340 2918 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-81" Jan 13 23:42:39.637888 kubelet[2918]: E0113 23:42:39.637834 2918 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.81:6443/api/v1/nodes\": dial tcp 172.31.22.81:6443: connect: connection refused" node="ip-172-31-22-81" Jan 13 23:42:40.104766 kubelet[2918]: E0113 23:42:40.104650 2918 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-81\" not found" node="ip-172-31-22-81" Jan 13 23:42:40.112670 kubelet[2918]: E0113 23:42:40.112541 2918 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-81\" not found" node="ip-172-31-22-81" Jan 13 23:42:40.118071 kubelet[2918]: E0113 23:42:40.118028 2918 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-81\" not found" node="ip-172-31-22-81" Jan 13 23:42:41.120970 kubelet[2918]: E0113 23:42:41.120678 2918 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-81\" not found" node="ip-172-31-22-81" Jan 13 23:42:41.120970 kubelet[2918]: E0113 23:42:41.120800 2918 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ip-172-31-22-81\" not found" node="ip-172-31-22-81" Jan 13 23:42:41.242653 kubelet[2918]: I0113 23:42:41.242580 2918 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-81" Jan 13 23:42:42.123970 kubelet[2918]: E0113 23:42:42.123919 2918 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-81\" not found" node="ip-172-31-22-81" Jan 13 23:42:43.279070 kubelet[2918]: E0113 23:42:43.278802 2918 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-81\" not found" node="ip-172-31-22-81" Jan 13 23:42:43.336380 update_engine[1948]: I20260113 23:42:43.336285 1948 update_attempter.cc:509] Updating boot flags... Jan 13 23:42:44.414947 kubelet[2918]: E0113 23:42:44.414886 2918 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-22-81\" not found" node="ip-172-31-22-81" Jan 13 23:42:44.556740 kubelet[2918]: I0113 23:42:44.556680 2918 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-22-81" Jan 13 23:42:44.609400 kubelet[2918]: I0113 23:42:44.608079 2918 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-81" Jan 13 23:42:44.862724 kubelet[2918]: I0113 23:42:44.856083 2918 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-81" Jan 13 23:42:44.919174 kubelet[2918]: I0113 23:42:44.917241 2918 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-81" Jan 13 23:42:44.980374 kubelet[2918]: I0113 23:42:44.979933 2918 apiserver.go:52] "Watching apiserver" Jan 13 23:42:45.010277 kubelet[2918]: I0113 23:42:45.009177 2918 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 13 23:42:47.050181 systemd[1]: Reload requested from client PID 3455 ('systemctl') (unit session-8.scope)... Jan 13 23:42:47.050217 systemd[1]: Reloading... Jan 13 23:42:47.291177 zram_generator::config[3520]: No configuration found. Jan 13 23:42:47.805725 systemd[1]: Reloading finished in 754 ms. Jan 13 23:42:47.875363 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:42:47.895266 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 23:42:47.896184 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:42:47.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:47.896692 systemd[1]: kubelet.service: Consumed 1.795s CPU time, 130.9M memory peak. Jan 13 23:42:47.897756 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 13 23:42:47.897861 kernel: audit: type=1131 audit(1768347767.895:406): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:42:47.907303 kernel: audit: type=1334 audit(1768347767.902:407): prog-id=120 op=LOAD Jan 13 23:42:47.907424 kernel: audit: type=1334 audit(1768347767.902:408): prog-id=85 op=UNLOAD Jan 13 23:42:47.902000 audit: BPF prog-id=120 op=LOAD Jan 13 23:42:47.902000 audit: BPF prog-id=85 op=UNLOAD Jan 13 23:42:47.902685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:42:47.907000 audit: BPF prog-id=121 op=LOAD Jan 13 23:42:47.912166 kernel: audit: type=1334 audit(1768347767.907:409): prog-id=121 op=LOAD Jan 13 23:42:47.912405 kernel: audit: type=1334 audit(1768347767.909:410): prog-id=73 op=UNLOAD Jan 13 23:42:47.912460 kernel: audit: type=1334 audit(1768347767.909:411): prog-id=122 op=LOAD Jan 13 23:42:47.909000 audit: BPF prog-id=73 op=UNLOAD Jan 13 23:42:47.909000 audit: BPF prog-id=122 op=LOAD Jan 13 23:42:47.913000 audit: BPF prog-id=123 op=LOAD Jan 13 23:42:47.913000 audit: BPF prog-id=74 op=UNLOAD Jan 13 23:42:47.917706 kernel: audit: type=1334 audit(1768347767.913:412): prog-id=123 op=LOAD Jan 13 23:42:47.917850 kernel: audit: type=1334 audit(1768347767.913:413): prog-id=74 op=UNLOAD Jan 13 23:42:47.913000 audit: BPF prog-id=75 op=UNLOAD Jan 13 23:42:47.919621 kernel: audit: type=1334 audit(1768347767.913:414): prog-id=75 op=UNLOAD Jan 13 23:42:47.921389 kernel: audit: type=1334 audit(1768347767.918:415): prog-id=124 op=LOAD Jan 13 23:42:47.918000 audit: BPF prog-id=124 op=LOAD Jan 13 23:42:47.918000 audit: BPF prog-id=77 op=UNLOAD Jan 13 23:42:47.922000 audit: BPF prog-id=125 op=LOAD Jan 13 23:42:47.922000 audit: BPF prog-id=126 op=LOAD Jan 13 23:42:47.922000 audit: BPF prog-id=78 op=UNLOAD Jan 13 23:42:47.922000 audit: BPF prog-id=79 op=UNLOAD Jan 13 23:42:47.927000 audit: BPF prog-id=127 op=LOAD Jan 13 23:42:47.932000 audit: BPF prog-id=80 op=UNLOAD Jan 13 23:42:47.932000 audit: BPF prog-id=128 op=LOAD Jan 13 23:42:47.932000 audit: BPF prog-id=129 op=LOAD Jan 13 23:42:47.932000 audit: BPF prog-id=81 op=UNLOAD Jan 13 23:42:47.932000 audit: BPF prog-id=82 op=UNLOAD Jan 13 23:42:47.933000 audit: BPF prog-id=130 op=LOAD Jan 13 23:42:47.933000 audit: BPF prog-id=131 op=LOAD Jan 13 23:42:47.933000 audit: BPF prog-id=83 op=UNLOAD Jan 13 23:42:47.933000 audit: BPF prog-id=84 op=UNLOAD Jan 13 23:42:47.936000 audit: BPF prog-id=132 op=LOAD Jan 13 23:42:47.936000 audit: BPF prog-id=76 op=UNLOAD Jan 13 23:42:47.940000 audit: BPF prog-id=133 op=LOAD Jan 13 23:42:47.940000 audit: BPF prog-id=86 op=UNLOAD Jan 13 23:42:47.940000 audit: BPF prog-id=134 op=LOAD Jan 13 23:42:47.940000 audit: BPF prog-id=135 op=LOAD Jan 13 23:42:47.940000 audit: BPF prog-id=87 op=UNLOAD Jan 13 23:42:47.940000 audit: BPF prog-id=88 op=UNLOAD Jan 13 23:42:47.944000 audit: BPF prog-id=136 op=LOAD Jan 13 23:42:47.944000 audit: BPF prog-id=89 op=UNLOAD Jan 13 23:42:47.948000 audit: BPF prog-id=137 op=LOAD Jan 13 23:42:47.948000 audit: BPF prog-id=70 op=UNLOAD Jan 13 23:42:47.949000 audit: BPF prog-id=138 op=LOAD Jan 13 23:42:47.949000 audit: BPF prog-id=139 op=LOAD Jan 13 23:42:47.949000 audit: BPF prog-id=71 op=UNLOAD Jan 13 23:42:47.949000 audit: BPF prog-id=72 op=UNLOAD Jan 13 23:42:48.281096 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:42:48.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:42:48.297776 (kubelet)[3562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 23:42:48.400037 kubelet[3562]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 23:42:48.402170 kubelet[3562]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 13 23:42:48.402170 kubelet[3562]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 23:42:48.402170 kubelet[3562]: I0113 23:42:48.400782 3562 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 23:42:48.418283 kubelet[3562]: I0113 23:42:48.418237 3562 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 13 23:42:48.418444 kubelet[3562]: I0113 23:42:48.418425 3562 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 23:42:48.419011 kubelet[3562]: I0113 23:42:48.418980 3562 server.go:954] "Client rotation is on, will bootstrap in background" Jan 13 23:42:48.421447 kubelet[3562]: I0113 23:42:48.421407 3562 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 23:42:48.427088 kubelet[3562]: I0113 23:42:48.427016 3562 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 23:42:48.442312 kubelet[3562]: I0113 23:42:48.442258 3562 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 13 23:42:48.448035 kubelet[3562]: I0113 23:42:48.447979 3562 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 23:42:48.448515 kubelet[3562]: I0113 23:42:48.448439 3562 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 23:42:48.448829 kubelet[3562]: I0113 23:42:48.448501 3562 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-81","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 23:42:48.449002 kubelet[3562]: I0113 23:42:48.448828 3562 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 23:42:48.449002 kubelet[3562]: I0113 23:42:48.448850 3562 container_manager_linux.go:304] "Creating device plugin manager" Jan 13 23:42:48.449568 kubelet[3562]: I0113 23:42:48.449013 3562 state_mem.go:36] "Initialized new in-memory state store" Jan 13 23:42:48.449568 kubelet[3562]: I0113 23:42:48.449471 3562 kubelet.go:446] "Attempting to sync node with API server" Jan 13 23:42:48.449568 kubelet[3562]: I0113 23:42:48.449550 3562 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 23:42:48.451316 kubelet[3562]: I0113 23:42:48.449595 3562 kubelet.go:352] "Adding apiserver pod source" Jan 13 23:42:48.451316 kubelet[3562]: I0113 23:42:48.449615 3562 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 23:42:48.458470 kubelet[3562]: I0113 23:42:48.455818 3562 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 13 23:42:48.458470 kubelet[3562]: I0113 23:42:48.456603 3562 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 23:42:48.461830 kubelet[3562]: I0113 23:42:48.461118 3562 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 13 23:42:48.461830 kubelet[3562]: I0113 23:42:48.461815 3562 server.go:1287] "Started kubelet" Jan 13 23:42:48.467577 kubelet[3562]: I0113 23:42:48.467522 3562 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 23:42:48.482288 kubelet[3562]: I0113 23:42:48.482229 3562 server.go:169] "Starting to 
listen" address="0.0.0.0" port=10250 Jan 13 23:42:48.492272 kubelet[3562]: I0113 23:42:48.492218 3562 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 13 23:42:48.492798 kubelet[3562]: E0113 23:42:48.492747 3562 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-22-81\" not found" Jan 13 23:42:48.494010 kubelet[3562]: I0113 23:42:48.493971 3562 server.go:479] "Adding debug handlers to kubelet server" Jan 13 23:42:48.494298 kubelet[3562]: I0113 23:42:48.494263 3562 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 13 23:42:48.494298 kubelet[3562]: I0113 23:42:48.486434 3562 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 23:42:48.494789 kubelet[3562]: I0113 23:42:48.494743 3562 reconciler.go:26] "Reconciler: start to sync state" Jan 13 23:42:48.496256 kubelet[3562]: I0113 23:42:48.482611 3562 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 23:42:48.498043 kubelet[3562]: I0113 23:42:48.497995 3562 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 23:42:48.516163 kubelet[3562]: I0113 23:42:48.515508 3562 factory.go:221] Registration of the systemd container factory successfully Jan 13 23:42:48.516163 kubelet[3562]: I0113 23:42:48.515704 3562 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 23:42:48.522058 kubelet[3562]: E0113 23:42:48.522001 3562 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 23:42:48.531274 kubelet[3562]: I0113 23:42:48.529759 3562 factory.go:221] Registration of the containerd container factory successfully Jan 13 23:42:48.571290 kubelet[3562]: I0113 23:42:48.570347 3562 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 23:42:48.595824 kubelet[3562]: I0113 23:42:48.595401 3562 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 23:42:48.595824 kubelet[3562]: I0113 23:42:48.595463 3562 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 13 23:42:48.595824 kubelet[3562]: I0113 23:42:48.595495 3562 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 13 23:42:48.595824 kubelet[3562]: I0113 23:42:48.595511 3562 kubelet.go:2382] "Starting kubelet main sync loop" Jan 13 23:42:48.595824 kubelet[3562]: E0113 23:42:48.595619 3562 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 23:42:48.695828 kubelet[3562]: E0113 23:42:48.695729 3562 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 23:42:48.708530 kubelet[3562]: I0113 23:42:48.708476 3562 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 13 23:42:48.708530 kubelet[3562]: I0113 23:42:48.708509 3562 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 13 23:42:48.708728 kubelet[3562]: I0113 23:42:48.708547 3562 state_mem.go:36] "Initialized new in-memory state store" Jan 13 23:42:48.708863 kubelet[3562]: I0113 23:42:48.708829 3562 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 23:42:48.708920 kubelet[3562]: I0113 23:42:48.708861 3562 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 23:42:48.708920 kubelet[3562]: I0113 23:42:48.708897 3562 policy_none.go:49] "None policy: Start" Jan 13 23:42:48.708920 kubelet[3562]: I0113 23:42:48.708915 3562 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 13 23:42:48.709048 kubelet[3562]: I0113 23:42:48.708934 3562 state_mem.go:35] "Initializing new in-memory state store" Jan 13 23:42:48.709152 kubelet[3562]: I0113 23:42:48.709116 3562 state_mem.go:75] "Updated machine memory state" Jan 13 23:42:48.717515 kubelet[3562]: I0113 23:42:48.717443 3562 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 23:42:48.718214 kubelet[3562]: I0113 23:42:48.718053 3562 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 23:42:48.718519 kubelet[3562]: I0113 23:42:48.718088 3562 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 23:42:48.719031 kubelet[3562]: I0113 23:42:48.718980 3562 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 23:42:48.724019 kubelet[3562]: E0113 23:42:48.723367 3562 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 13 23:42:48.834166 kubelet[3562]: I0113 23:42:48.833288 3562 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-81" Jan 13 23:42:48.850713 kubelet[3562]: I0113 23:42:48.850636 3562 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-22-81" Jan 13 23:42:48.850932 kubelet[3562]: I0113 23:42:48.850793 3562 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-22-81" Jan 13 23:42:48.898751 kubelet[3562]: I0113 23:42:48.897312 3562 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-81" Jan 13 23:42:48.898751 kubelet[3562]: I0113 23:42:48.897659 3562 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-81" Jan 13 23:42:48.898751 kubelet[3562]: I0113 23:42:48.898091 3562 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-81" Jan 13 23:42:48.913354 kubelet[3562]: E0113 23:42:48.913289 3562 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-22-81\" already exists" pod="kube-system/kube-scheduler-ip-172-31-22-81" Jan 13 23:42:48.914730 kubelet[3562]: E0113 23:42:48.914646 3562 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-22-81\" already exists" pod="kube-system/kube-apiserver-ip-172-31-22-81" Jan 13 23:42:48.914958 kubelet[3562]: E0113 23:42:48.914835 3562 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-22-81\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-22-81" Jan 13 23:42:48.997785 kubelet[3562]: I0113 23:42:48.997714 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19661e3783ae6a5498e7ab32f695904d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-81\" (UID: \"19661e3783ae6a5498e7ab32f695904d\") " pod="kube-system/kube-apiserver-ip-172-31-22-81" Jan 13 23:42:48.997938 kubelet[3562]: I0113 23:42:48.997793 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/77dae837c76ee31eccf813da0ad8d8f1-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-81\" (UID: \"77dae837c76ee31eccf813da0ad8d8f1\") " pod="kube-system/kube-controller-manager-ip-172-31-22-81" Jan 13 23:42:48.997938 kubelet[3562]: I0113 23:42:48.997839 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77dae837c76ee31eccf813da0ad8d8f1-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-81\" (UID: \"77dae837c76ee31eccf813da0ad8d8f1\") " pod="kube-system/kube-controller-manager-ip-172-31-22-81" Jan 13 23:42:48.997938 kubelet[3562]: I0113 23:42:48.997881 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19661e3783ae6a5498e7ab32f695904d-ca-certs\") pod \"kube-apiserver-ip-172-31-22-81\" (UID: \"19661e3783ae6a5498e7ab32f695904d\") " pod="kube-system/kube-apiserver-ip-172-31-22-81" Jan 13 23:42:48.997938 kubelet[3562]: I0113 23:42:48.997916 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/19661e3783ae6a5498e7ab32f695904d-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-81\" (UID: \"19661e3783ae6a5498e7ab32f695904d\") " pod="kube-system/kube-apiserver-ip-172-31-22-81" Jan 13 23:42:48.998156 kubelet[3562]: I0113 23:42:48.997949 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77dae837c76ee31eccf813da0ad8d8f1-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-81\" (UID: \"77dae837c76ee31eccf813da0ad8d8f1\") " pod="kube-system/kube-controller-manager-ip-172-31-22-81" Jan 13 23:42:48.998156 kubelet[3562]: I0113 23:42:48.997983 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/77dae837c76ee31eccf813da0ad8d8f1-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-81\" (UID: \"77dae837c76ee31eccf813da0ad8d8f1\") " pod="kube-system/kube-controller-manager-ip-172-31-22-81" Jan 13 23:42:48.998156 kubelet[3562]: I0113 23:42:48.998020 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77dae837c76ee31eccf813da0ad8d8f1-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-81\" (UID: \"77dae837c76ee31eccf813da0ad8d8f1\") " pod="kube-system/kube-controller-manager-ip-172-31-22-81" Jan 13 23:42:48.998156 kubelet[3562]: I0113 23:42:48.998056 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f36ea20b10c19ade6e0df918b046067-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-81\" (UID: \"4f36ea20b10c19ade6e0df918b046067\") " pod="kube-system/kube-scheduler-ip-172-31-22-81" Jan 13 23:42:49.451377 kubelet[3562]: I0113 23:42:49.450948 3562 apiserver.go:52] "Watching apiserver" Jan 13 23:42:49.494821 kubelet[3562]: I0113 23:42:49.494726 3562 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 13 23:42:49.662161 kubelet[3562]: I0113 23:42:49.659872 3562 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-81" Jan 13 23:42:49.687614 kubelet[3562]: E0113 23:42:49.687442 3562 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-22-81\" already exists" pod="kube-system/kube-apiserver-ip-172-31-22-81" Jan 13 23:42:49.715215 kubelet[3562]: I0113 23:42:49.714939 3562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-22-81" podStartSLOduration=5.714882127 podStartE2EDuration="5.714882127s" podCreationTimestamp="2026-01-13 23:42:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:42:49.712346587 +0000 UTC m=+1.405089608" watchObservedRunningTime="2026-01-13 23:42:49.714882127 +0000 UTC m=+1.407625136" Jan 13 23:42:49.763227 kubelet[3562]: I0113 23:42:49.762696 3562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-22-81" podStartSLOduration=5.762676015 podStartE2EDuration="5.762676015s" podCreationTimestamp="2026-01-13 23:42:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:42:49.740582191 +0000 UTC 
m=+1.433325200" watchObservedRunningTime="2026-01-13 23:42:49.762676015 +0000 UTC m=+1.455419012" Jan 13 23:42:49.790556 kubelet[3562]: I0113 23:42:49.790465 3562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-22-81" podStartSLOduration=5.790441567 podStartE2EDuration="5.790441567s" podCreationTimestamp="2026-01-13 23:42:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:42:49.765874135 +0000 UTC m=+1.458617156" watchObservedRunningTime="2026-01-13 23:42:49.790441567 +0000 UTC m=+1.483184576" Jan 13 23:42:54.359835 kubelet[3562]: I0113 23:42:54.359776 3562 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 23:42:54.361056 containerd[1990]: time="2026-01-13T23:42:54.360363322Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 23:42:54.363307 kubelet[3562]: I0113 23:42:54.361415 3562 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 23:42:55.136414 kubelet[3562]: I0113 23:42:55.136152 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1836ef3-1cf4-43e4-a51b-7303fa47f4de-xtables-lock\") pod \"kube-proxy-rpw9r\" (UID: \"f1836ef3-1cf4-43e4-a51b-7303fa47f4de\") " pod="kube-system/kube-proxy-rpw9r" Jan 13 23:42:55.136414 kubelet[3562]: I0113 23:42:55.136222 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f1836ef3-1cf4-43e4-a51b-7303fa47f4de-kube-proxy\") pod \"kube-proxy-rpw9r\" (UID: \"f1836ef3-1cf4-43e4-a51b-7303fa47f4de\") " pod="kube-system/kube-proxy-rpw9r" Jan 13 23:42:55.136414 kubelet[3562]: I0113 23:42:55.136268 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1836ef3-1cf4-43e4-a51b-7303fa47f4de-lib-modules\") pod \"kube-proxy-rpw9r\" (UID: \"f1836ef3-1cf4-43e4-a51b-7303fa47f4de\") " pod="kube-system/kube-proxy-rpw9r" Jan 13 23:42:55.136414 kubelet[3562]: I0113 23:42:55.136345 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffhtr\" (UniqueName: \"kubernetes.io/projected/f1836ef3-1cf4-43e4-a51b-7303fa47f4de-kube-api-access-ffhtr\") pod \"kube-proxy-rpw9r\" (UID: \"f1836ef3-1cf4-43e4-a51b-7303fa47f4de\") " pod="kube-system/kube-proxy-rpw9r" Jan 13 23:42:55.147348 systemd[1]: Created slice kubepods-besteffort-podf1836ef3_1cf4_43e4_a51b_7303fa47f4de.slice - libcontainer container kubepods-besteffort-podf1836ef3_1cf4_43e4_a51b_7303fa47f4de.slice. 
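
The "Created slice" line above shows where the CgroupDriver=systemd setting from the NodeConfig lands: the BestEffort kube-proxy pod gets a kubepods-besteffort-pod<uid>.slice unit whose name embeds the pod UID with dashes mapped to underscores. A tiny sketch of that naming convention (the helper name is invented), just to make the unit name readable:

    # Sketch of the naming convention visible in the systemd message above: with
    # the systemd cgroup driver, a BestEffort pod's UID (dashes replaced by
    # underscores) is embedded in a kubepods-besteffort-pod<uid>.slice unit.
    # The function name is invented; the output matches the log line exactly.
    def besteffort_pod_slice(pod_uid: str) -> str:
        return "kubepods-besteffort-pod" + pod_uid.replace("-", "_") + ".slice"

    print(besteffort_pod_slice("f1836ef3-1cf4-43e4-a51b-7303fa47f4de"))
    # kubepods-besteffort-podf1836ef3_1cf4_43e4_a51b_7303fa47f4de.slice
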
Jan 13 23:42:55.462789 containerd[1990]: time="2026-01-13T23:42:55.462625523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rpw9r,Uid:f1836ef3-1cf4-43e4-a51b-7303fa47f4de,Namespace:kube-system,Attempt:0,}" Jan 13 23:42:55.507012 containerd[1990]: time="2026-01-13T23:42:55.506928659Z" level=info msg="connecting to shim 1e6195ba95613c781a84c392e2e3011ce1aec23c5e91e692d81fc87b44cb626d" address="unix:///run/containerd/s/f3eda0e7d6ef17bb8eaaecba4fdd1b18993ad94fd8d5509ad689eb18d7b3fdb4" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:42:55.538909 kubelet[3562]: I0113 23:42:55.538628 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/763e250c-d2d7-4565-84db-0fa178ef7c13-var-lib-calico\") pod \"tigera-operator-7dcd859c48-cw8vb\" (UID: \"763e250c-d2d7-4565-84db-0fa178ef7c13\") " pod="tigera-operator/tigera-operator-7dcd859c48-cw8vb" Jan 13 23:42:55.538909 kubelet[3562]: I0113 23:42:55.538759 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljbwd\" (UniqueName: \"kubernetes.io/projected/763e250c-d2d7-4565-84db-0fa178ef7c13-kube-api-access-ljbwd\") pod \"tigera-operator-7dcd859c48-cw8vb\" (UID: \"763e250c-d2d7-4565-84db-0fa178ef7c13\") " pod="tigera-operator/tigera-operator-7dcd859c48-cw8vb" Jan 13 23:42:55.570922 systemd[1]: Created slice kubepods-besteffort-pod763e250c_d2d7_4565_84db_0fa178ef7c13.slice - libcontainer container kubepods-besteffort-pod763e250c_d2d7_4565_84db_0fa178ef7c13.slice. Jan 13 23:42:55.620521 systemd[1]: Started cri-containerd-1e6195ba95613c781a84c392e2e3011ce1aec23c5e91e692d81fc87b44cb626d.scope - libcontainer container 1e6195ba95613c781a84c392e2e3011ce1aec23c5e91e692d81fc87b44cb626d. 
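
The "connecting to shim" line above ties the kube-proxy sandbox to its shim socket under /run/containerd/s/ over ttrpc; the 64-character hex string is the sandbox ID that reappears in the cri-containerd-<id>.scope unit. An illustrative Python snippet for pulling both values out of such a line (the regex is written against the exact format shown here and nothing else):

    # Illustrative only: extract the sandbox ID and the shim's ttrpc socket from
    # a containerd "connecting to shim" line, copied verbatim from the log above.
    import re

    LINE = ('time="2026-01-13T23:42:55.506928659Z" level=info '
            'msg="connecting to shim 1e6195ba95613c781a84c392e2e3011ce1aec23c5e91e692d81fc87b44cb626d" '
            'address="unix:///run/containerd/s/f3eda0e7d6ef17bb8eaaecba4fdd1b18993ad94fd8d5509ad689eb18d7b3fdb4" '
            'namespace=k8s.io protocol=ttrpc version=3')

    m = re.search(r'connecting to shim (?P<sandbox>[0-9a-f]{64})" address="(?P<addr>unix://[^"]+)"', LINE)
    print(m.group("sandbox"))  # 64-hex sandbox ID, reused in the cri-containerd-<id>.scope unit
    print(m.group("addr"))     # unix:///run/containerd/s/f3eda0e7...
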
Jan 13 23:42:55.645000 audit: BPF prog-id=140 op=LOAD Jan 13 23:42:55.647860 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 13 23:42:55.648270 kernel: audit: type=1334 audit(1768347775.645:448): prog-id=140 op=LOAD Jan 13 23:42:55.646000 audit: BPF prog-id=141 op=LOAD Jan 13 23:42:55.652754 kernel: audit: type=1334 audit(1768347775.646:449): prog-id=141 op=LOAD Jan 13 23:42:55.653340 kernel: audit: type=1300 audit(1768347775.646:449): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=3616 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:55.646000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=3616 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:55.664761 kernel: audit: type=1327 audit(1768347775.646:449): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165363139356261393536313363373831613834633339326532653330 Jan 13 23:42:55.664884 kernel: audit: type=1334 audit(1768347775.648:450): prog-id=141 op=UNLOAD Jan 13 23:42:55.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165363139356261393536313363373831613834633339326532653330 Jan 13 23:42:55.648000 audit: BPF prog-id=141 op=UNLOAD Jan 13 23:42:55.668926 kernel: audit: type=1300 audit(1768347775.648:450): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3616 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:55.648000 audit[3627]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3616 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:55.682892 kernel: audit: type=1327 audit(1768347775.648:450): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165363139356261393536313363373831613834633339326532653330 Jan 13 23:42:55.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165363139356261393536313363373831613834633339326532653330 Jan 13 23:42:55.690551 kernel: audit: type=1334 audit(1768347775.648:451): prog-id=142 op=LOAD Jan 13 23:42:55.648000 audit: BPF prog-id=142 op=LOAD Jan 13 23:42:55.700601 kernel: audit: type=1300 audit(1768347775.648:451): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3616 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:55.648000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3616 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:55.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165363139356261393536313363373831613834633339326532653330 Jan 13 23:42:55.711096 kernel: audit: type=1327 audit(1768347775.648:451): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165363139356261393536313363373831613834633339326532653330 Jan 13 23:42:55.648000 audit: BPF prog-id=143 op=LOAD Jan 13 23:42:55.648000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3616 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:55.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165363139356261393536313363373831613834633339326532653330 Jan 13 23:42:55.648000 audit: BPF prog-id=143 op=UNLOAD Jan 13 23:42:55.648000 audit[3627]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3616 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:55.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165363139356261393536313363373831613834633339326532653330 Jan 13 23:42:55.648000 audit: BPF prog-id=142 op=UNLOAD Jan 13 23:42:55.648000 audit[3627]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3616 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:55.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165363139356261393536313363373831613834633339326532653330 Jan 13 23:42:55.648000 audit: BPF prog-id=144 op=LOAD Jan 13 23:42:55.648000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=3616 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:55.648000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165363139356261393536313363373831613834633339326532653330 Jan 13 23:42:55.767522 containerd[1990]: time="2026-01-13T23:42:55.767197969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rpw9r,Uid:f1836ef3-1cf4-43e4-a51b-7303fa47f4de,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e6195ba95613c781a84c392e2e3011ce1aec23c5e91e692d81fc87b44cb626d\"" Jan 13 23:42:55.773791 containerd[1990]: time="2026-01-13T23:42:55.773627197Z" level=info msg="CreateContainer within sandbox \"1e6195ba95613c781a84c392e2e3011ce1aec23c5e91e692d81fc87b44cb626d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 23:42:55.792436 containerd[1990]: time="2026-01-13T23:42:55.792368317Z" level=info msg="Container 56f1560973c0e38e6cad7f6de9c833e31edef28404b5b2dd53a6217aa20b6f29: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:42:55.806654 containerd[1990]: time="2026-01-13T23:42:55.806586757Z" level=info msg="CreateContainer within sandbox \"1e6195ba95613c781a84c392e2e3011ce1aec23c5e91e692d81fc87b44cb626d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"56f1560973c0e38e6cad7f6de9c833e31edef28404b5b2dd53a6217aa20b6f29\"" Jan 13 23:42:55.808812 containerd[1990]: time="2026-01-13T23:42:55.808743733Z" level=info msg="StartContainer for \"56f1560973c0e38e6cad7f6de9c833e31edef28404b5b2dd53a6217aa20b6f29\"" Jan 13 23:42:55.812511 containerd[1990]: time="2026-01-13T23:42:55.812407009Z" level=info msg="connecting to shim 56f1560973c0e38e6cad7f6de9c833e31edef28404b5b2dd53a6217aa20b6f29" address="unix:///run/containerd/s/f3eda0e7d6ef17bb8eaaecba4fdd1b18993ad94fd8d5509ad689eb18d7b3fdb4" protocol=ttrpc version=3 Jan 13 23:42:55.847516 systemd[1]: Started cri-containerd-56f1560973c0e38e6cad7f6de9c833e31edef28404b5b2dd53a6217aa20b6f29.scope - libcontainer container 56f1560973c0e38e6cad7f6de9c833e31edef28404b5b2dd53a6217aa20b6f29. 
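
The audit BPF prog-id LOAD/UNLOAD records above carry comm="runc" exe="/usr/bin/runc", i.e. runc being driven while the sandbox and kube-proxy container are created. Each PROCTITLE field is a hex-encoded, NUL-separated argv string that the kernel truncates (this one is exactly 128 bytes). The snippet below is just that decode step, applied to the proctitle from the records above:

    # The PROCTITLE fields in the audit records above are hex-encoded argv
    # strings separated by NUL bytes (and truncated by the kernel; this one is
    # exactly 128 bytes). Decoding it shows runc being invoked for the
    # 1e6195ba... sandbox created above.
    PROCTITLE = ("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E63"
                 "2F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F"
                 "2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E"
                 "696F2F3165363139356261393536313363373831613834633339326532653330")

    argv = [part.decode() for part in bytes.fromhex(PROCTITLE).split(b"\x00")]
    print(argv)
    # ['runc', '--root', '/run/containerd/runc/k8s.io', '--log',
    #  '/run/containerd/io.containerd.runtime.v2.task/k8s.io/1e6195ba95613c781a84c392e2e30']
    # (the last argument is cut short by the 128-byte proctitle limit)
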
Jan 13 23:42:55.889699 containerd[1990]: time="2026-01-13T23:42:55.889418509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-cw8vb,Uid:763e250c-d2d7-4565-84db-0fa178ef7c13,Namespace:tigera-operator,Attempt:0,}" Jan 13 23:42:55.925060 containerd[1990]: time="2026-01-13T23:42:55.925006009Z" level=info msg="connecting to shim d42c7ea173d67ae600f6ec0b3bb0f897a85e7c5ea5da9496a11ff3db1c890bb4" address="unix:///run/containerd/s/37cfa305a5ff2b2e2d899bce2f6bbdf9ca155eff281f2112a145d70c6b0945cf" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:42:55.931000 audit: BPF prog-id=145 op=LOAD Jan 13 23:42:55.931000 audit[3653]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3616 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:55.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536663135363039373363306533386536636164376636646539633833 Jan 13 23:42:55.931000 audit: BPF prog-id=146 op=LOAD Jan 13 23:42:55.931000 audit[3653]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3616 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:55.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536663135363039373363306533386536636164376636646539633833 Jan 13 23:42:55.931000 audit: BPF prog-id=146 op=UNLOAD Jan 13 23:42:55.931000 audit[3653]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3616 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:55.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536663135363039373363306533386536636164376636646539633833 Jan 13 23:42:55.931000 audit: BPF prog-id=145 op=UNLOAD Jan 13 23:42:55.931000 audit[3653]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3616 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:55.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536663135363039373363306533386536636164376636646539633833 Jan 13 23:42:55.931000 audit: BPF prog-id=147 op=LOAD Jan 13 23:42:55.931000 audit[3653]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3616 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:55.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536663135363039373363306533386536636164376636646539633833 Jan 13 23:42:56.004856 systemd[1]: Started cri-containerd-d42c7ea173d67ae600f6ec0b3bb0f897a85e7c5ea5da9496a11ff3db1c890bb4.scope - libcontainer container d42c7ea173d67ae600f6ec0b3bb0f897a85e7c5ea5da9496a11ff3db1c890bb4. Jan 13 23:42:56.009822 containerd[1990]: time="2026-01-13T23:42:56.009743938Z" level=info msg="StartContainer for \"56f1560973c0e38e6cad7f6de9c833e31edef28404b5b2dd53a6217aa20b6f29\" returns successfully" Jan 13 23:42:56.032000 audit: BPF prog-id=148 op=LOAD Jan 13 23:42:56.033000 audit: BPF prog-id=149 op=LOAD Jan 13 23:42:56.033000 audit[3694]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3681 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434326337656131373364363761653630306636656330623362623066 Jan 13 23:42:56.033000 audit: BPF prog-id=149 op=UNLOAD Jan 13 23:42:56.033000 audit[3694]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3681 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434326337656131373364363761653630306636656330623362623066 Jan 13 23:42:56.033000 audit: BPF prog-id=150 op=LOAD Jan 13 23:42:56.033000 audit[3694]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3681 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434326337656131373364363761653630306636656330623362623066 Jan 13 23:42:56.033000 audit: BPF prog-id=151 op=LOAD Jan 13 23:42:56.033000 audit[3694]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3681 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.033000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434326337656131373364363761653630306636656330623362623066 Jan 13 23:42:56.034000 audit: BPF prog-id=151 op=UNLOAD Jan 13 23:42:56.034000 audit[3694]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3681 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434326337656131373364363761653630306636656330623362623066 Jan 13 23:42:56.034000 audit: BPF prog-id=150 op=UNLOAD Jan 13 23:42:56.034000 audit[3694]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3681 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434326337656131373364363761653630306636656330623362623066 Jan 13 23:42:56.034000 audit: BPF prog-id=152 op=LOAD Jan 13 23:42:56.034000 audit[3694]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3681 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434326337656131373364363761653630306636656330623362623066 Jan 13 23:42:56.105325 containerd[1990]: time="2026-01-13T23:42:56.105242626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-cw8vb,Uid:763e250c-d2d7-4565-84db-0fa178ef7c13,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d42c7ea173d67ae600f6ec0b3bb0f897a85e7c5ea5da9496a11ff3db1c890bb4\"" Jan 13 23:42:56.110723 containerd[1990]: time="2026-01-13T23:42:56.110578126Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 13 23:42:56.273853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount666682808.mount: Deactivated successfully. 
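
The long run of audit records that follows shows iptables and ip6tables (by all appearances invoked by kube-proxy) registering the KUBE-PROXY-CANARY, KUBE-EXTERNAL-SERVICES, KUBE-NODEPORTS, KUBE-SERVICES, KUBE-FORWARD, KUBE-PROXY-FIREWALL and KUBE-POSTROUTING chains and rules in the mangle, nat and filter tables; each PROCTITLE decodes, exactly as in the runc sketch above, to a command such as iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle. A small, purely illustrative parser for the key=value part of a NETFILTER_CFG record (helper name invented, sample abridged from the first entry below):

    # Illustrative parser for the NETFILTER_CFG records below: each is a flat
    # key=value audit line; family 2 corresponds to IPv4 (comm="iptables") and
    # family 10 to IPv6 (comm="ip6tables").
    FAMILY = {2: "IPv4 (iptables)", 10: "IPv6 (ip6tables)"}

    def parse_netfilter_cfg(record: str) -> dict:
        fields = dict(kv.split("=", 1) for kv in record.split() if "=" in kv)
        fields["family"] = FAMILY.get(int(fields["family"]), fields["family"])
        return fields

    sample = 'table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3761 comm="iptables"'
    print(parse_netfilter_cfg(sample))
    # {'table': 'mangle:54', 'family': 'IPv4 (iptables)', 'entries': '1',
    #  'op': 'nft_register_chain', 'pid': '3761', 'comm': '"iptables"'}
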
Jan 13 23:42:56.303000 audit[3761]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3761 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:56.303000 audit[3761]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffdfc3f90 a2=0 a3=1 items=0 ppid=3666 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.303000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 13 23:42:56.306000 audit[3762]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3762 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:56.306000 audit[3762]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd139c280 a2=0 a3=1 items=0 ppid=3666 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.306000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 13 23:42:56.311000 audit[3764]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_chain pid=3764 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:56.311000 audit[3764]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffed1e1b40 a2=0 a3=1 items=0 ppid=3666 pid=3764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.311000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 13 23:42:56.312000 audit[3765]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=3765 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:56.312000 audit[3765]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd2bd2d00 a2=0 a3=1 items=0 ppid=3666 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.312000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 13 23:42:56.319000 audit[3766]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3766 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:56.319000 audit[3766]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe6198410 a2=0 a3=1 items=0 ppid=3666 pid=3766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.319000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 13 23:42:56.326000 audit[3767]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3767 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:56.326000 audit[3767]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeb5e47c0 a2=0 a3=1 items=0 ppid=3666 pid=3767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.326000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 13 23:42:56.417000 audit[3768]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3768 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:56.417000 audit[3768]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffdfe54d20 a2=0 a3=1 items=0 ppid=3666 pid=3768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.417000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 13 23:42:56.423000 audit[3770]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3770 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:56.423000 audit[3770]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffea35b860 a2=0 a3=1 items=0 ppid=3666 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.423000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 13 23:42:56.432000 audit[3773]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3773 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:56.432000 audit[3773]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffcb871460 a2=0 a3=1 items=0 ppid=3666 pid=3773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.432000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 13 23:42:56.436000 audit[3774]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3774 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:56.436000 audit[3774]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd870d590 a2=0 a3=1 items=0 ppid=3666 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.436000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 13 23:42:56.449000 audit[3776]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3776 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:56.449000 audit[3776]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff78acbb0 a2=0 a3=1 items=0 ppid=3666 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.449000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 13 23:42:56.451000 audit[3777]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3777 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:56.451000 audit[3777]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc7af7030 a2=0 a3=1 items=0 ppid=3666 pid=3777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.451000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 13 23:42:56.458000 audit[3779]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3779 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:56.458000 audit[3779]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffda450b90 a2=0 a3=1 items=0 ppid=3666 pid=3779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.458000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 13 23:42:56.466000 audit[3782]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3782 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:56.466000 audit[3782]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff8b30770 a2=0 a3=1 items=0 ppid=3666 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.466000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 13 23:42:56.468000 audit[3783]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3783 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:56.468000 audit[3783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff0a871e0 a2=0 a3=1 items=0 ppid=3666 pid=3783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.468000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 13 23:42:56.474000 audit[3785]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3785 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:56.474000 audit[3785]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffec0aaeb0 a2=0 a3=1 items=0 ppid=3666 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.474000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 13 23:42:56.476000 audit[3786]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3786 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:56.476000 audit[3786]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc79cda50 a2=0 a3=1 items=0 ppid=3666 pid=3786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.476000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 13 23:42:56.482000 audit[3788]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3788 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:56.482000 audit[3788]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd13ae6b0 a2=0 a3=1 items=0 ppid=3666 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.482000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 13 23:42:56.490000 audit[3791]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3791 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:56.490000 audit[3791]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffebdd1b70 a2=0 a3=1 items=0 ppid=3666 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.490000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 13 23:42:56.501000 audit[3794]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3794 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:56.501000 audit[3794]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffffc61ed0 a2=0 a3=1 items=0 ppid=3666 pid=3794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.501000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 13 23:42:56.503000 audit[3795]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3795 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:56.503000 audit[3795]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd2254690 a2=0 a3=1 items=0 ppid=3666 pid=3795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.503000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 13 23:42:56.509000 audit[3797]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3797 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:56.509000 audit[3797]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffdc1923b0 a2=0 a3=1 items=0 ppid=3666 pid=3797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.509000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 13 23:42:56.516000 audit[3800]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3800 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:56.516000 audit[3800]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc8ac7370 a2=0 a3=1 items=0 ppid=3666 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.516000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 13 23:42:56.519000 audit[3801]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3801 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:56.519000 audit[3801]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff3d6e9a0 a2=0 a3=1 items=0 ppid=3666 pid=3801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.519000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 13 23:42:56.525000 audit[3803]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3803 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:42:56.525000 audit[3803]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=532 a0=3 a1=fffff7518cb0 a2=0 a3=1 items=0 ppid=3666 pid=3803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.525000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 13 23:42:56.565000 audit[3809]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3809 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:42:56.565000 audit[3809]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdc471780 a2=0 a3=1 items=0 ppid=3666 pid=3809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.565000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:42:56.576000 audit[3809]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3809 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:42:56.576000 audit[3809]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffdc471780 a2=0 a3=1 items=0 ppid=3666 pid=3809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.576000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:42:56.579000 audit[3814]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3814 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:56.579000 audit[3814]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe66cb560 a2=0 a3=1 items=0 ppid=3666 pid=3814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.579000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 13 23:42:56.585000 audit[3816]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3816 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:56.585000 audit[3816]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe60503a0 a2=0 a3=1 items=0 ppid=3666 pid=3816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.585000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 13 23:42:56.593000 audit[3819]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3819 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 13 23:42:56.593000 audit[3819]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffdee62cb0 a2=0 a3=1 items=0 ppid=3666 pid=3819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.593000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 13 23:42:56.596000 audit[3820]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3820 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:56.596000 audit[3820]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff7c15f60 a2=0 a3=1 items=0 ppid=3666 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.596000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 13 23:42:56.603000 audit[3822]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3822 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:56.603000 audit[3822]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd3cc3a70 a2=0 a3=1 items=0 ppid=3666 pid=3822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.603000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 13 23:42:56.607000 audit[3823]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3823 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:56.607000 audit[3823]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc214c710 a2=0 a3=1 items=0 ppid=3666 pid=3823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.607000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 13 23:42:56.613000 audit[3825]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3825 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:56.613000 audit[3825]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd2a6a890 a2=0 a3=1 items=0 ppid=3666 pid=3825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.613000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 13 23:42:56.621000 audit[3828]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3828 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:56.621000 audit[3828]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffd73ed710 a2=0 a3=1 items=0 ppid=3666 pid=3828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.621000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 13 23:42:56.623000 audit[3829]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3829 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:56.623000 audit[3829]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc7697110 a2=0 a3=1 items=0 ppid=3666 pid=3829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.623000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 13 23:42:56.629000 audit[3831]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3831 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:56.629000 audit[3831]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffff761ee0 a2=0 a3=1 items=0 ppid=3666 pid=3831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.629000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 13 23:42:56.632000 audit[3832]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3832 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:56.632000 audit[3832]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe8ded060 a2=0 a3=1 items=0 ppid=3666 pid=3832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.632000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 13 23:42:56.637000 audit[3834]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3834 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:56.637000 audit[3834]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffea9d64c0 a2=0 a3=1 items=0 ppid=3666 pid=3834 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.637000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 13 23:42:56.646000 audit[3837]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3837 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:56.646000 audit[3837]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffa1a6e40 a2=0 a3=1 items=0 ppid=3666 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.646000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 13 23:42:56.655000 audit[3840]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3840 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:56.655000 audit[3840]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd30beb90 a2=0 a3=1 items=0 ppid=3666 pid=3840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.655000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 13 23:42:56.657000 audit[3841]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3841 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:56.657000 audit[3841]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd7bcc200 a2=0 a3=1 items=0 ppid=3666 pid=3841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.657000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 13 23:42:56.663000 audit[3843]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3843 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:56.663000 audit[3843]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffc0496ae0 a2=0 a3=1 items=0 ppid=3666 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.663000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 13 
23:42:56.672000 audit[3846]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3846 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:56.672000 audit[3846]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcc560d70 a2=0 a3=1 items=0 ppid=3666 pid=3846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.672000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 13 23:42:56.674000 audit[3847]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3847 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:56.674000 audit[3847]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffefb7d030 a2=0 a3=1 items=0 ppid=3666 pid=3847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.674000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 13 23:42:56.689000 audit[3849]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3849 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:56.689000 audit[3849]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffdb01ee90 a2=0 a3=1 items=0 ppid=3666 pid=3849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.689000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 13 23:42:56.695000 audit[3850]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3850 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:56.695000 audit[3850]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffc17ffb0 a2=0 a3=1 items=0 ppid=3666 pid=3850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.695000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 13 23:42:56.707000 audit[3852]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3852 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:56.707000 audit[3852]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffcfa09060 a2=0 a3=1 items=0 ppid=3666 pid=3852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.707000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 13 23:42:56.716358 kubelet[3562]: I0113 23:42:56.716123 3562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rpw9r" podStartSLOduration=1.716097205 podStartE2EDuration="1.716097205s" podCreationTimestamp="2026-01-13 23:42:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:42:56.715532173 +0000 UTC m=+8.408275206" watchObservedRunningTime="2026-01-13 23:42:56.716097205 +0000 UTC m=+8.408840214" Jan 13 23:42:56.727000 audit[3855]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3855 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:42:56.727000 audit[3855]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe9cfd600 a2=0 a3=1 items=0 ppid=3666 pid=3855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.727000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 13 23:42:56.735000 audit[3857]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3857 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 13 23:42:56.735000 audit[3857]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffe0ce8f40 a2=0 a3=1 items=0 ppid=3666 pid=3857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.735000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:42:56.736000 audit[3857]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3857 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 13 23:42:56.736000 audit[3857]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffe0ce8f40 a2=0 a3=1 items=0 ppid=3666 pid=3857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:56.736000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:42:57.867695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3698704146.mount: Deactivated successfully. 
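The audit PROCTITLE fields in the block above are hex-encoded, NUL-separated argv vectors. A minimal sketch for decoding one back into the iptables invocation it represents; the sample value is copied verbatim from the first PROCTITLE record in this block, everything else is illustrative.

```python
# Decode an audit PROCTITLE value (hex of NUL-separated argv) into a shell-style command line.
import shlex

def decode_proctitle(hex_value: str) -> str:
    argv = bytes.fromhex(hex_value).split(b"\x00")
    return shlex.join(arg.decode("utf-8", errors="replace") for arg in argv)

sample = (
    "69707461626C6573002D770035002D5700313030303030002D4900"
    "504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E7400"
    "2D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E67"
    "2072756C6573002D6A004B5542452D504F5354524F5554494E47"
)
print(decode_proctitle(sample))
# iptables -w 5 -W 100000 -I POSTROUTING -t nat -m comment --comment 'kubernetes postrouting rules' -j KUBE-POSTROUTING
```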
Jan 13 23:42:59.710196 containerd[1990]: time="2026-01-13T23:42:59.710075452Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:42:59.716373 containerd[1990]: time="2026-01-13T23:42:59.714784840Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Jan 13 23:42:59.717820 containerd[1990]: time="2026-01-13T23:42:59.717736408Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:42:59.725190 containerd[1990]: time="2026-01-13T23:42:59.724579300Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:42:59.726442 containerd[1990]: time="2026-01-13T23:42:59.726380080Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 3.615735438s" Jan 13 23:42:59.726543 containerd[1990]: time="2026-01-13T23:42:59.726441352Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 13 23:42:59.732021 containerd[1990]: time="2026-01-13T23:42:59.731936944Z" level=info msg="CreateContainer within sandbox \"d42c7ea173d67ae600f6ec0b3bb0f897a85e7c5ea5da9496a11ff3db1c890bb4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 13 23:42:59.754201 containerd[1990]: time="2026-01-13T23:42:59.754103452Z" level=info msg="Container 3850581de1f61303d3e6b0cce6dca349c84a1bcc90a8fb5c33eb5540c7cf61ae: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:42:59.774714 containerd[1990]: time="2026-01-13T23:42:59.774628385Z" level=info msg="CreateContainer within sandbox \"d42c7ea173d67ae600f6ec0b3bb0f897a85e7c5ea5da9496a11ff3db1c890bb4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3850581de1f61303d3e6b0cce6dca349c84a1bcc90a8fb5c33eb5540c7cf61ae\"" Jan 13 23:42:59.776318 containerd[1990]: time="2026-01-13T23:42:59.776239841Z" level=info msg="StartContainer for \"3850581de1f61303d3e6b0cce6dca349c84a1bcc90a8fb5c33eb5540c7cf61ae\"" Jan 13 23:42:59.779015 containerd[1990]: time="2026-01-13T23:42:59.778485401Z" level=info msg="connecting to shim 3850581de1f61303d3e6b0cce6dca349c84a1bcc90a8fb5c33eb5540c7cf61ae" address="unix:///run/containerd/s/37cfa305a5ff2b2e2d899bce2f6bbdf9ca155eff281f2112a145d70c6b0945cf" protocol=ttrpc version=3 Jan 13 23:42:59.822469 systemd[1]: Started cri-containerd-3850581de1f61303d3e6b0cce6dca349c84a1bcc90a8fb5c33eb5540c7cf61ae.scope - libcontainer container 3850581de1f61303d3e6b0cce6dca349c84a1bcc90a8fb5c33eb5540c7cf61ae. 
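The containerd "Pulled image" record above reports the pull time inline (3.615735438s for quay.io/tigera/operator:v1.38.7). A small assumed helper, not part of any containerd tooling, for pulling the image reference and duration out of journal lines shaped like that one:

```python
# Extract (image, seconds) from a containerd "Pulled image ... in <duration>" log line.
import re

PULLED = re.compile(r'Pulled image \\?"(?P<image>[^"\\]+)\\?".*? in (?P<dur>[0-9.]+)(?P<unit>m?s)')

def pull_duration(line: str):
    m = PULLED.search(line)
    if not m:
        return None
    seconds = float(m.group("dur"))
    if m.group("unit") == "ms":
        seconds /= 1000.0
    return m.group("image"), seconds

# For the record above this would yield ("quay.io/tigera/operator:v1.38.7", 3.615735438).
```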
Jan 13 23:42:59.847000 audit: BPF prog-id=153 op=LOAD Jan 13 23:42:59.848000 audit: BPF prog-id=154 op=LOAD Jan 13 23:42:59.848000 audit[3866]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=3681 pid=3866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:59.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338353035383164653166363133303364336536623063636536646361 Jan 13 23:42:59.848000 audit: BPF prog-id=154 op=UNLOAD Jan 13 23:42:59.848000 audit[3866]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3681 pid=3866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:59.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338353035383164653166363133303364336536623063636536646361 Jan 13 23:42:59.848000 audit: BPF prog-id=155 op=LOAD Jan 13 23:42:59.848000 audit[3866]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=3681 pid=3866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:59.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338353035383164653166363133303364336536623063636536646361 Jan 13 23:42:59.848000 audit: BPF prog-id=156 op=LOAD Jan 13 23:42:59.848000 audit[3866]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=3681 pid=3866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:59.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338353035383164653166363133303364336536623063636536646361 Jan 13 23:42:59.848000 audit: BPF prog-id=156 op=UNLOAD Jan 13 23:42:59.848000 audit[3866]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3681 pid=3866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:59.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338353035383164653166363133303364336536623063636536646361 Jan 13 23:42:59.848000 audit: BPF prog-id=155 op=UNLOAD Jan 13 23:42:59.848000 audit[3866]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3681 pid=3866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:59.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338353035383164653166363133303364336536623063636536646361 Jan 13 23:42:59.848000 audit: BPF prog-id=157 op=LOAD Jan 13 23:42:59.848000 audit[3866]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=3681 pid=3866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:59.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338353035383164653166363133303364336536623063636536646361 Jan 13 23:42:59.892811 containerd[1990]: time="2026-01-13T23:42:59.891738569Z" level=info msg="StartContainer for \"3850581de1f61303d3e6b0cce6dca349c84a1bcc90a8fb5c33eb5540c7cf61ae\" returns successfully" Jan 13 23:43:08.906542 sudo[2339]: pam_unix(sudo:session): session closed for user root Jan 13 23:43:08.915113 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 13 23:43:08.915352 kernel: audit: type=1106 audit(1768347788.905:528): pid=2339 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:43:08.905000 audit[2339]: USER_END pid=2339 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:43:08.905000 audit[2339]: CRED_DISP pid=2339 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:43:08.926150 kernel: audit: type=1104 audit(1768347788.905:529): pid=2339 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:43:08.992035 sshd[2338]: Connection closed by 20.161.92.111 port 39130 Jan 13 23:43:08.992522 sshd-session[2334]: pam_unix(sshd:session): session closed for user core Jan 13 23:43:08.997000 audit[2334]: USER_END pid=2334 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:43:09.013864 systemd[1]: sshd@6-172.31.22.81:22-20.161.92.111:39130.service: Deactivated successfully. 
Jan 13 23:43:08.997000 audit[2334]: CRED_DISP pid=2334 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:43:09.019607 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 23:43:09.020217 systemd[1]: session-8.scope: Consumed 11.407s CPU time, 225.2M memory peak. Jan 13 23:43:09.021349 kernel: audit: type=1106 audit(1768347788.997:530): pid=2334 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:43:09.021881 kernel: audit: type=1104 audit(1768347788.997:531): pid=2334 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:43:09.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.22.81:22-20.161.92.111:39130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:43:09.028582 kernel: audit: type=1131 audit(1768347789.013:532): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.22.81:22-20.161.92.111:39130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:43:09.025898 systemd-logind[1947]: Session 8 logged out. Waiting for processes to exit. Jan 13 23:43:09.033630 systemd-logind[1947]: Removed session 8. 
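In the kernel echoes above of the form "audit(1768347788.905:528)", the number before the colon is a Unix timestamp and the trailing number is the record serial, so the stamp can be cross-checked against the journald wall-clock prefix. A purely illustrative sketch:

```python
# Convert the "<epoch>.<msec>:<serial>" stamp inside audit(...) to a UTC datetime.
from datetime import datetime, timezone

def audit_stamp_to_utc(stamp: str) -> datetime:
    epoch, _serial = stamp.split(":")
    return datetime.fromtimestamp(float(epoch), tz=timezone.utc)

print(audit_stamp_to_utc("1768347788.905:528"))
# ~ 2026-01-13 23:43:08.905 UTC, matching the "Jan 13 23:43:08" prefix on the same records
```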
Jan 13 23:43:12.886000 audit[3953]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3953 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:12.886000 audit[3953]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffe28eb40 a2=0 a3=1 items=0 ppid=3666 pid=3953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:12.900209 kernel: audit: type=1325 audit(1768347792.886:533): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3953 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:12.900352 kernel: audit: type=1300 audit(1768347792.886:533): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffe28eb40 a2=0 a3=1 items=0 ppid=3666 pid=3953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:12.886000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:12.904506 kernel: audit: type=1327 audit(1768347792.886:533): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:12.904000 audit[3953]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3953 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:12.904000 audit[3953]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffe28eb40 a2=0 a3=1 items=0 ppid=3666 pid=3953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:12.920442 kernel: audit: type=1325 audit(1768347792.904:534): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3953 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:12.920591 kernel: audit: type=1300 audit(1768347792.904:534): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffe28eb40 a2=0 a3=1 items=0 ppid=3666 pid=3953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:12.904000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:12.940000 audit[3955]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3955 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:12.940000 audit[3955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc181d300 a2=0 a3=1 items=0 ppid=3666 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:12.940000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:12.947000 audit[3955]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3955 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 13 23:43:12.947000 audit[3955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc181d300 a2=0 a3=1 items=0 ppid=3666 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:12.947000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:20.989000 audit[3958]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3958 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:20.991781 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 13 23:43:20.991903 kernel: audit: type=1325 audit(1768347800.989:537): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3958 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:20.989000 audit[3958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffba20310 a2=0 a3=1 items=0 ppid=3666 pid=3958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:21.002754 kernel: audit: type=1300 audit(1768347800.989:537): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffba20310 a2=0 a3=1 items=0 ppid=3666 pid=3958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:20.989000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:21.009260 kernel: audit: type=1327 audit(1768347800.989:537): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:21.003000 audit[3958]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3958 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:21.013512 kernel: audit: type=1325 audit(1768347801.003:538): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3958 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:21.003000 audit[3958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffba20310 a2=0 a3=1 items=0 ppid=3666 pid=3958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:21.003000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:21.045755 kernel: audit: type=1300 audit(1768347801.003:538): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffba20310 a2=0 a3=1 items=0 ppid=3666 pid=3958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:21.045904 kernel: audit: type=1327 audit(1768347801.003:538): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:21.261000 audit[3960]: NETFILTER_CFG 
table=filter:111 family=2 entries=18 op=nft_register_rule pid=3960 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:21.261000 audit[3960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff67c7470 a2=0 a3=1 items=0 ppid=3666 pid=3960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:21.275000 kernel: audit: type=1325 audit(1768347801.261:539): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3960 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:21.275150 kernel: audit: type=1300 audit(1768347801.261:539): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff67c7470 a2=0 a3=1 items=0 ppid=3666 pid=3960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:21.261000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:21.283971 kernel: audit: type=1327 audit(1768347801.261:539): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:21.278000 audit[3960]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3960 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:21.287880 kernel: audit: type=1325 audit(1768347801.278:540): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3960 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:21.278000 audit[3960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff67c7470 a2=0 a3=1 items=0 ppid=3666 pid=3960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:21.278000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:22.310000 audit[3962]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3962 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:22.310000 audit[3962]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffded45cd0 a2=0 a3=1 items=0 ppid=3666 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:22.310000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:22.317000 audit[3962]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3962 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:22.317000 audit[3962]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffded45cd0 a2=0 a3=1 items=0 ppid=3666 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:22.317000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:24.926000 audit[3964]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3964 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:24.926000 audit[3964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd4bcaab0 a2=0 a3=1 items=0 ppid=3666 pid=3964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:24.926000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:24.940000 audit[3964]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3964 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:24.940000 audit[3964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd4bcaab0 a2=0 a3=1 items=0 ppid=3666 pid=3964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:24.940000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:24.989815 kubelet[3562]: I0113 23:43:24.989656 3562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-cw8vb" podStartSLOduration=26.370692192 podStartE2EDuration="29.989350698s" podCreationTimestamp="2026-01-13 23:42:55 +0000 UTC" firstStartedPulling="2026-01-13 23:42:56.10938619 +0000 UTC m=+7.802129187" lastFinishedPulling="2026-01-13 23:42:59.728044696 +0000 UTC m=+11.420787693" observedRunningTime="2026-01-13 23:43:00.722726273 +0000 UTC m=+12.415469306" watchObservedRunningTime="2026-01-13 23:43:24.989350698 +0000 UTC m=+36.682093707" Jan 13 23:43:25.014757 systemd[1]: Created slice kubepods-besteffort-poddfe68ce6_1980_4e25_a1fd_861e32fb1d7a.slice - libcontainer container kubepods-besteffort-poddfe68ce6_1980_4e25_a1fd_861e32fb1d7a.slice. 
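The pod_startup_latency_tracker record above for tigera-operator-7dcd859c48-cw8vb is consistent with podStartE2EDuration being the observed-running time minus the pod creation time, and podStartSLOduration being that figure minus the image-pull window. A quick arithmetic check using the timestamps from the record, expressed as seconds after the 23:42:55 creation time:

```python
# Offsets (seconds) after podCreationTimestamp 2026-01-13 23:42:55 UTC, read from the kubelet record above.
pull_start = 1.10938619      # firstStartedPulling     23:42:56.10938619
pull_done  = 4.728044696     # lastFinishedPulling     23:42:59.728044696
observed   = 29.989350698    # watchObservedRunningTime 23:43:24.989350698

e2e = observed                        # podStartE2EDuration  = 29.989350698s
slo = e2e - (pull_done - pull_start)  # podStartSLOduration ~= 26.370692192s
print(round(e2e, 9), round(slo, 9))
```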
Jan 13 23:43:25.030000 audit[3966]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3966 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:25.030000 audit[3966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd10c2750 a2=0 a3=1 items=0 ppid=3666 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:25.030000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:25.034000 audit[3966]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3966 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:25.034000 audit[3966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd10c2750 a2=0 a3=1 items=0 ppid=3666 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:25.034000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:25.044070 kubelet[3562]: I0113 23:43:25.044009 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dfe68ce6-1980-4e25-a1fd-861e32fb1d7a-typha-certs\") pod \"calico-typha-698f88cd94-mvtxs\" (UID: \"dfe68ce6-1980-4e25-a1fd-861e32fb1d7a\") " pod="calico-system/calico-typha-698f88cd94-mvtxs" Jan 13 23:43:25.044265 kubelet[3562]: I0113 23:43:25.044082 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7n5h\" (UniqueName: \"kubernetes.io/projected/dfe68ce6-1980-4e25-a1fd-861e32fb1d7a-kube-api-access-x7n5h\") pod \"calico-typha-698f88cd94-mvtxs\" (UID: \"dfe68ce6-1980-4e25-a1fd-861e32fb1d7a\") " pod="calico-system/calico-typha-698f88cd94-mvtxs" Jan 13 23:43:25.044265 kubelet[3562]: I0113 23:43:25.044158 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfe68ce6-1980-4e25-a1fd-861e32fb1d7a-tigera-ca-bundle\") pod \"calico-typha-698f88cd94-mvtxs\" (UID: \"dfe68ce6-1980-4e25-a1fd-861e32fb1d7a\") " pod="calico-system/calico-typha-698f88cd94-mvtxs" Jan 13 23:43:25.192863 systemd[1]: Created slice kubepods-besteffort-pod43cf10fa_495e_413a_b74d_6f6834d495e8.slice - libcontainer container kubepods-besteffort-pod43cf10fa_495e_413a_b74d_6f6834d495e8.slice. 
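Across the 23:43:xx iptables-restore NETFILTER_CFG records above, the entries= count reported for the IPv4 filter table climbs (15, 16, 17, 18, 19, 21, 22) as kube-proxy keeps restoring a growing ruleset, while the nat batches stay at 12. A throwaway parser sketch (assumed helper, not from any tool shown here) for tallying those counts from journal text like this:

```python
# Collect successive entries= values per (table, family) from NETFILTER_CFG audit records.
import re
from collections import defaultdict

NETFILTER_CFG = re.compile(
    r"NETFILTER_CFG table=(?P<table>\w+):\d+ family=(?P<family>\d+) "
    r"entries=(?P<entries>\d+) op=(?P<op>\w+)"
)

def entry_counts(lines):
    seen = defaultdict(list)  # (table, family) -> successive entries= values
    for line in lines:
        for m in NETFILTER_CFG.finditer(line):
            seen[(m["table"], int(m["family"]))].append(int(m["entries"]))
    return dict(seen)

# e.g. entry_counts(journal_lines)[("filter", 2)] -> [..., 15, 16, 17, 18, 19, 21, 22, ...]
```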
Jan 13 23:43:25.246373 kubelet[3562]: I0113 23:43:25.246303 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/43cf10fa-495e-413a-b74d-6f6834d495e8-node-certs\") pod \"calico-node-sntjv\" (UID: \"43cf10fa-495e-413a-b74d-6f6834d495e8\") " pod="calico-system/calico-node-sntjv" Jan 13 23:43:25.246588 kubelet[3562]: I0113 23:43:25.246383 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/43cf10fa-495e-413a-b74d-6f6834d495e8-var-run-calico\") pod \"calico-node-sntjv\" (UID: \"43cf10fa-495e-413a-b74d-6f6834d495e8\") " pod="calico-system/calico-node-sntjv" Jan 13 23:43:25.246588 kubelet[3562]: I0113 23:43:25.246422 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/43cf10fa-495e-413a-b74d-6f6834d495e8-flexvol-driver-host\") pod \"calico-node-sntjv\" (UID: \"43cf10fa-495e-413a-b74d-6f6834d495e8\") " pod="calico-system/calico-node-sntjv" Jan 13 23:43:25.246588 kubelet[3562]: I0113 23:43:25.246457 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43cf10fa-495e-413a-b74d-6f6834d495e8-tigera-ca-bundle\") pod \"calico-node-sntjv\" (UID: \"43cf10fa-495e-413a-b74d-6f6834d495e8\") " pod="calico-system/calico-node-sntjv" Jan 13 23:43:25.246588 kubelet[3562]: I0113 23:43:25.246496 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkmlr\" (UniqueName: \"kubernetes.io/projected/43cf10fa-495e-413a-b74d-6f6834d495e8-kube-api-access-jkmlr\") pod \"calico-node-sntjv\" (UID: \"43cf10fa-495e-413a-b74d-6f6834d495e8\") " pod="calico-system/calico-node-sntjv" Jan 13 23:43:25.246588 kubelet[3562]: I0113 23:43:25.246537 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/43cf10fa-495e-413a-b74d-6f6834d495e8-policysync\") pod \"calico-node-sntjv\" (UID: \"43cf10fa-495e-413a-b74d-6f6834d495e8\") " pod="calico-system/calico-node-sntjv" Jan 13 23:43:25.247022 kubelet[3562]: I0113 23:43:25.246576 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43cf10fa-495e-413a-b74d-6f6834d495e8-xtables-lock\") pod \"calico-node-sntjv\" (UID: \"43cf10fa-495e-413a-b74d-6f6834d495e8\") " pod="calico-system/calico-node-sntjv" Jan 13 23:43:25.247022 kubelet[3562]: I0113 23:43:25.246619 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43cf10fa-495e-413a-b74d-6f6834d495e8-lib-modules\") pod \"calico-node-sntjv\" (UID: \"43cf10fa-495e-413a-b74d-6f6834d495e8\") " pod="calico-system/calico-node-sntjv" Jan 13 23:43:25.247022 kubelet[3562]: I0113 23:43:25.246655 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/43cf10fa-495e-413a-b74d-6f6834d495e8-var-lib-calico\") pod \"calico-node-sntjv\" (UID: \"43cf10fa-495e-413a-b74d-6f6834d495e8\") " pod="calico-system/calico-node-sntjv" Jan 13 23:43:25.247022 kubelet[3562]: I0113 23:43:25.246691 3562 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/43cf10fa-495e-413a-b74d-6f6834d495e8-cni-bin-dir\") pod \"calico-node-sntjv\" (UID: \"43cf10fa-495e-413a-b74d-6f6834d495e8\") " pod="calico-system/calico-node-sntjv" Jan 13 23:43:25.247022 kubelet[3562]: I0113 23:43:25.246726 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/43cf10fa-495e-413a-b74d-6f6834d495e8-cni-log-dir\") pod \"calico-node-sntjv\" (UID: \"43cf10fa-495e-413a-b74d-6f6834d495e8\") " pod="calico-system/calico-node-sntjv" Jan 13 23:43:25.247518 kubelet[3562]: I0113 23:43:25.246772 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/43cf10fa-495e-413a-b74d-6f6834d495e8-cni-net-dir\") pod \"calico-node-sntjv\" (UID: \"43cf10fa-495e-413a-b74d-6f6834d495e8\") " pod="calico-system/calico-node-sntjv" Jan 13 23:43:25.303384 kubelet[3562]: E0113 23:43:25.303281 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lblk8" podUID="884e12a9-b4d3-4695-bc91-5cdf1a464d0b" Jan 13 23:43:25.325309 containerd[1990]: time="2026-01-13T23:43:25.325208331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-698f88cd94-mvtxs,Uid:dfe68ce6-1980-4e25-a1fd-861e32fb1d7a,Namespace:calico-system,Attempt:0,}" Jan 13 23:43:25.347256 kubelet[3562]: I0113 23:43:25.347080 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksj5f\" (UniqueName: \"kubernetes.io/projected/884e12a9-b4d3-4695-bc91-5cdf1a464d0b-kube-api-access-ksj5f\") pod \"csi-node-driver-lblk8\" (UID: \"884e12a9-b4d3-4695-bc91-5cdf1a464d0b\") " pod="calico-system/csi-node-driver-lblk8" Jan 13 23:43:25.349338 kubelet[3562]: I0113 23:43:25.349228 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/884e12a9-b4d3-4695-bc91-5cdf1a464d0b-registration-dir\") pod \"csi-node-driver-lblk8\" (UID: \"884e12a9-b4d3-4695-bc91-5cdf1a464d0b\") " pod="calico-system/csi-node-driver-lblk8" Jan 13 23:43:25.350278 kubelet[3562]: I0113 23:43:25.350087 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/884e12a9-b4d3-4695-bc91-5cdf1a464d0b-varrun\") pod \"csi-node-driver-lblk8\" (UID: \"884e12a9-b4d3-4695-bc91-5cdf1a464d0b\") " pod="calico-system/csi-node-driver-lblk8" Jan 13 23:43:25.351713 kubelet[3562]: I0113 23:43:25.350690 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/884e12a9-b4d3-4695-bc91-5cdf1a464d0b-kubelet-dir\") pod \"csi-node-driver-lblk8\" (UID: \"884e12a9-b4d3-4695-bc91-5cdf1a464d0b\") " pod="calico-system/csi-node-driver-lblk8" Jan 13 23:43:25.351920 kubelet[3562]: I0113 23:43:25.351882 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/884e12a9-b4d3-4695-bc91-5cdf1a464d0b-socket-dir\") pod 
\"csi-node-driver-lblk8\" (UID: \"884e12a9-b4d3-4695-bc91-5cdf1a464d0b\") " pod="calico-system/csi-node-driver-lblk8" Jan 13 23:43:25.357033 kubelet[3562]: E0113 23:43:25.356970 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.357353 kubelet[3562]: W0113 23:43:25.357251 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.357537 kubelet[3562]: E0113 23:43:25.357299 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.359150 kubelet[3562]: E0113 23:43:25.359066 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.360618 kubelet[3562]: W0113 23:43:25.359114 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.360618 kubelet[3562]: E0113 23:43:25.360474 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.365168 kubelet[3562]: E0113 23:43:25.364524 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.365168 kubelet[3562]: W0113 23:43:25.364567 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.368616 kubelet[3562]: E0113 23:43:25.364601 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.368616 kubelet[3562]: E0113 23:43:25.367420 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.368616 kubelet[3562]: W0113 23:43:25.367449 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.368616 kubelet[3562]: E0113 23:43:25.367481 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.372641 kubelet[3562]: E0113 23:43:25.372545 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.372641 kubelet[3562]: W0113 23:43:25.372587 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.372641 kubelet[3562]: E0113 23:43:25.372665 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:43:25.382337 kubelet[3562]: E0113 23:43:25.381839 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.382337 kubelet[3562]: W0113 23:43:25.381873 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.382337 kubelet[3562]: E0113 23:43:25.381930 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.392527 kubelet[3562]: E0113 23:43:25.388597 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.392527 kubelet[3562]: W0113 23:43:25.388640 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.392527 kubelet[3562]: E0113 23:43:25.388673 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.427642 kubelet[3562]: E0113 23:43:25.427591 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.427642 kubelet[3562]: W0113 23:43:25.427630 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.427867 kubelet[3562]: E0113 23:43:25.427661 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.438093 containerd[1990]: time="2026-01-13T23:43:25.437858488Z" level=info msg="connecting to shim ce428d2a25dc6ef362b0794ea1574c349f8eec1ed0790a8a64dacbcd0fb3bb8c" address="unix:///run/containerd/s/fe7f961e5d9dcfcc2abc4ea6a8bd26c9b3218d193656cb48e8c2463ffe90d3e7" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:43:25.456025 kubelet[3562]: E0113 23:43:25.455883 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.456255 kubelet[3562]: W0113 23:43:25.456223 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.456774 kubelet[3562]: E0113 23:43:25.456693 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:43:25.460507 kubelet[3562]: E0113 23:43:25.460455 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.460794 kubelet[3562]: W0113 23:43:25.460667 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.462872 kubelet[3562]: E0113 23:43:25.462761 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.463478 kubelet[3562]: E0113 23:43:25.463416 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.463478 kubelet[3562]: W0113 23:43:25.463463 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.463694 kubelet[3562]: E0113 23:43:25.463516 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.465392 kubelet[3562]: E0113 23:43:25.465325 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.465392 kubelet[3562]: W0113 23:43:25.465366 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.465937 kubelet[3562]: E0113 23:43:25.465877 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.467737 kubelet[3562]: E0113 23:43:25.467272 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.467737 kubelet[3562]: W0113 23:43:25.467312 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.467737 kubelet[3562]: E0113 23:43:25.467632 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.468377 kubelet[3562]: E0113 23:43:25.468318 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.468377 kubelet[3562]: W0113 23:43:25.468357 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.468563 kubelet[3562]: E0113 23:43:25.468484 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:43:25.469618 kubelet[3562]: E0113 23:43:25.469560 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.469618 kubelet[3562]: W0113 23:43:25.469603 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.470151 kubelet[3562]: E0113 23:43:25.470069 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.473239 kubelet[3562]: E0113 23:43:25.471628 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.473239 kubelet[3562]: W0113 23:43:25.471669 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.473239 kubelet[3562]: E0113 23:43:25.473218 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.473497 kubelet[3562]: E0113 23:43:25.473459 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.473497 kubelet[3562]: W0113 23:43:25.473480 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.473710 kubelet[3562]: E0113 23:43:25.473548 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.474601 kubelet[3562]: E0113 23:43:25.474418 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.474601 kubelet[3562]: W0113 23:43:25.474467 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.474601 kubelet[3562]: E0113 23:43:25.474547 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.476146 kubelet[3562]: E0113 23:43:25.476054 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.476146 kubelet[3562]: W0113 23:43:25.476105 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.476468 kubelet[3562]: E0113 23:43:25.476424 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:43:25.477602 kubelet[3562]: E0113 23:43:25.477541 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.477602 kubelet[3562]: W0113 23:43:25.477587 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.478079 kubelet[3562]: E0113 23:43:25.477835 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.479159 kubelet[3562]: E0113 23:43:25.478932 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.479159 kubelet[3562]: W0113 23:43:25.478975 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.479564 kubelet[3562]: E0113 23:43:25.479455 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.480352 kubelet[3562]: E0113 23:43:25.480298 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.480352 kubelet[3562]: W0113 23:43:25.480340 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.480625 kubelet[3562]: E0113 23:43:25.480574 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.481992 kubelet[3562]: E0113 23:43:25.481560 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.481992 kubelet[3562]: W0113 23:43:25.481601 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.481992 kubelet[3562]: E0113 23:43:25.481842 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:43:25.484053 kubelet[3562]: E0113 23:43:25.482422 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.484053 kubelet[3562]: W0113 23:43:25.482445 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.484053 kubelet[3562]: E0113 23:43:25.483668 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.484053 kubelet[3562]: W0113 23:43:25.483695 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.485016 kubelet[3562]: E0113 23:43:25.484566 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.485016 kubelet[3562]: E0113 23:43:25.484640 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.485430 kubelet[3562]: E0113 23:43:25.485384 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.485430 kubelet[3562]: W0113 23:43:25.485426 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.486157 kubelet[3562]: E0113 23:43:25.485796 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.486157 kubelet[3562]: E0113 23:43:25.486001 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.486157 kubelet[3562]: W0113 23:43:25.486020 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.486157 kubelet[3562]: E0113 23:43:25.486074 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.487419 kubelet[3562]: E0113 23:43:25.487316 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.487419 kubelet[3562]: W0113 23:43:25.487357 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.487745 kubelet[3562]: E0113 23:43:25.487664 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:43:25.489064 kubelet[3562]: E0113 23:43:25.487807 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.489064 kubelet[3562]: W0113 23:43:25.487836 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.489064 kubelet[3562]: E0113 23:43:25.488748 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.490081 kubelet[3562]: E0113 23:43:25.490026 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.490081 kubelet[3562]: W0113 23:43:25.490068 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.490609 kubelet[3562]: E0113 23:43:25.490405 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.491433 kubelet[3562]: E0113 23:43:25.490614 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.491433 kubelet[3562]: W0113 23:43:25.490634 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.491433 kubelet[3562]: E0113 23:43:25.491058 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.492039 kubelet[3562]: E0113 23:43:25.491922 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.492039 kubelet[3562]: W0113 23:43:25.491963 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.492039 kubelet[3562]: E0113 23:43:25.492010 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.493064 kubelet[3562]: E0113 23:43:25.493005 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.493064 kubelet[3562]: W0113 23:43:25.493047 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.493696 kubelet[3562]: E0113 23:43:25.493080 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:43:25.526473 containerd[1990]: time="2026-01-13T23:43:25.526404496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sntjv,Uid:43cf10fa-495e-413a-b74d-6f6834d495e8,Namespace:calico-system,Attempt:0,}" Jan 13 23:43:25.529925 systemd[1]: Started cri-containerd-ce428d2a25dc6ef362b0794ea1574c349f8eec1ed0790a8a64dacbcd0fb3bb8c.scope - libcontainer container ce428d2a25dc6ef362b0794ea1574c349f8eec1ed0790a8a64dacbcd0fb3bb8c. Jan 13 23:43:25.566158 kubelet[3562]: E0113 23:43:25.565553 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:25.566470 kubelet[3562]: W0113 23:43:25.566360 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:25.566470 kubelet[3562]: E0113 23:43:25.566409 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:25.621587 containerd[1990]: time="2026-01-13T23:43:25.621517721Z" level=info msg="connecting to shim 93642209e4428bf1be11addb51df0ac0e43d0368ec975c85a74d06cd4d98d96e" address="unix:///run/containerd/s/657bb21d44d31fa18af87af1acd163a11d9d50304bb10b0a10b5a59560a67cb1" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:43:25.621000 audit: BPF prog-id=158 op=LOAD Jan 13 23:43:25.622000 audit: BPF prog-id=159 op=LOAD Jan 13 23:43:25.622000 audit[4007]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3987 pid=4007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:25.622000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365343238643261323564633665663336326230373934656131353734 Jan 13 23:43:25.623000 audit: BPF prog-id=159 op=UNLOAD Jan 13 23:43:25.623000 audit[4007]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3987 pid=4007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:25.623000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365343238643261323564633665663336326230373934656131353734 Jan 13 23:43:25.623000 audit: BPF prog-id=160 op=LOAD Jan 13 23:43:25.623000 audit[4007]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3987 pid=4007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:25.623000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365343238643261323564633665663336326230373934656131353734 Jan 13 23:43:25.624000 audit: BPF prog-id=161 op=LOAD Jan 13 23:43:25.624000 audit[4007]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3987 pid=4007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:25.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365343238643261323564633665663336326230373934656131353734 Jan 13 23:43:25.626000 audit: BPF prog-id=161 op=UNLOAD Jan 13 23:43:25.626000 audit[4007]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3987 pid=4007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:25.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365343238643261323564633665663336326230373934656131353734 Jan 13 23:43:25.626000 audit: BPF prog-id=160 op=UNLOAD Jan 13 23:43:25.626000 audit[4007]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3987 pid=4007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:25.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365343238643261323564633665663336326230373934656131353734 Jan 13 23:43:25.627000 audit: BPF prog-id=162 op=LOAD Jan 13 23:43:25.627000 audit[4007]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3987 pid=4007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:25.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365343238643261323564633665663336326230373934656131353734 Jan 13 23:43:25.699545 systemd[1]: Started cri-containerd-93642209e4428bf1be11addb51df0ac0e43d0368ec975c85a74d06cd4d98d96e.scope - libcontainer container 93642209e4428bf1be11addb51df0ac0e43d0368ec975c85a74d06cd4d98d96e. 
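A note on the audit records above: each PROCTITLE field is the auditing process's command line, hex-encoded with NUL bytes separating the arguments and capped at 128 bytes, which is why the container ID at the end is cut short. A minimal sketch (illustrative only, not part of the log) that decodes one of the runc PROCTITLE values copied from the records above:

```go
// Decode the hex-encoded PROCTITLE value from one of the audit records for
// pid 4007 above. Arguments are NUL-separated; the value is truncated to
// 128 bytes, so the container ID is shortened.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Copied verbatim from the "audit: PROCTITLE" record above.
	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365343238643261323564633665663336326230373934656131353734"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// Join the NUL-separated arguments with spaces to recover the command line:
	// runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/ce428d2a25dc6ef362b0794ea1574
	fmt.Println(strings.Join(strings.Split(string(raw), "\x00"), " "))
}
```

The same decoding applies to the iptables-restore PROCTITLE values further down, which recover to `iptables-restore -w 5 -W 100000 --noflush --counters`.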
Jan 13 23:43:25.796000 audit: BPF prog-id=163 op=LOAD Jan 13 23:43:25.802000 audit: BPF prog-id=164 op=LOAD Jan 13 23:43:25.802000 audit[4067]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4056 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:25.802000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933363432323039653434323862663162653131616464623531646630 Jan 13 23:43:25.802000 audit: BPF prog-id=164 op=UNLOAD Jan 13 23:43:25.802000 audit[4067]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4056 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:25.802000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933363432323039653434323862663162653131616464623531646630 Jan 13 23:43:25.803000 audit: BPF prog-id=165 op=LOAD Jan 13 23:43:25.803000 audit[4067]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4056 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:25.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933363432323039653434323862663162653131616464623531646630 Jan 13 23:43:25.804000 audit: BPF prog-id=166 op=LOAD Jan 13 23:43:25.804000 audit[4067]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4056 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:25.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933363432323039653434323862663162653131616464623531646630 Jan 13 23:43:25.805000 audit: BPF prog-id=166 op=UNLOAD Jan 13 23:43:25.805000 audit[4067]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4056 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:25.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933363432323039653434323862663162653131616464623531646630 Jan 13 23:43:25.805000 audit: BPF prog-id=165 op=UNLOAD Jan 13 23:43:25.805000 audit[4067]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4056 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:25.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933363432323039653434323862663162653131616464623531646630 Jan 13 23:43:25.805000 audit: BPF prog-id=167 op=LOAD Jan 13 23:43:25.805000 audit[4067]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4056 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:25.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933363432323039653434323862663162653131616464623531646630 Jan 13 23:43:25.841861 containerd[1990]: time="2026-01-13T23:43:25.841462290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-698f88cd94-mvtxs,Uid:dfe68ce6-1980-4e25-a1fd-861e32fb1d7a,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce428d2a25dc6ef362b0794ea1574c349f8eec1ed0790a8a64dacbcd0fb3bb8c\"" Jan 13 23:43:25.852085 containerd[1990]: time="2026-01-13T23:43:25.851762310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 13 23:43:25.871331 containerd[1990]: time="2026-01-13T23:43:25.871079358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sntjv,Uid:43cf10fa-495e-413a-b74d-6f6834d495e8,Namespace:calico-system,Attempt:0,} returns sandbox id \"93642209e4428bf1be11addb51df0ac0e43d0368ec975c85a74d06cd4d98d96e\"" Jan 13 23:43:26.052000 audit[4100]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=4100 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:26.054632 kernel: kauditd_printk_skb: 64 callbacks suppressed Jan 13 23:43:26.054697 kernel: audit: type=1325 audit(1768347806.052:563): table=filter:119 family=2 entries=22 op=nft_register_rule pid=4100 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:26.052000 audit[4100]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe84466e0 a2=0 a3=1 items=0 ppid=3666 pid=4100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:26.064639 kernel: audit: type=1300 audit(1768347806.052:563): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe84466e0 a2=0 a3=1 items=0 ppid=3666 pid=4100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:26.064752 kernel: audit: type=1327 audit(1768347806.052:563): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:26.052000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:26.058000 audit[4100]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=4100 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:26.071101 kernel: audit: type=1325 audit(1768347806.058:564): table=nat:120 family=2 entries=12 op=nft_register_rule pid=4100 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:26.071752 kernel: audit: type=1300 audit(1768347806.058:564): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe84466e0 a2=0 a3=1 items=0 ppid=3666 pid=4100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:26.058000 audit[4100]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe84466e0 a2=0 a3=1 items=0 ppid=3666 pid=4100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:26.058000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:26.081405 kernel: audit: type=1327 audit(1768347806.058:564): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:27.105301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3157050610.mount: Deactivated successfully. Jan 13 23:43:27.597265 kubelet[3562]: E0113 23:43:27.596614 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lblk8" podUID="884e12a9-b4d3-4695-bc91-5cdf1a464d0b" Jan 13 23:43:27.849269 containerd[1990]: time="2026-01-13T23:43:27.848860628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:43:27.851451 containerd[1990]: time="2026-01-13T23:43:27.851352140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Jan 13 23:43:27.853204 containerd[1990]: time="2026-01-13T23:43:27.853114016Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:43:27.858233 containerd[1990]: time="2026-01-13T23:43:27.858117860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:43:27.859697 containerd[1990]: time="2026-01-13T23:43:27.859649276Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.00782591s" Jan 13 23:43:27.859987 containerd[1990]: time="2026-01-13T23:43:27.859847384Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 13 23:43:27.863501 containerd[1990]: time="2026-01-13T23:43:27.863315816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 13 23:43:27.894089 containerd[1990]: time="2026-01-13T23:43:27.893302700Z" level=info msg="CreateContainer within sandbox \"ce428d2a25dc6ef362b0794ea1574c349f8eec1ed0790a8a64dacbcd0fb3bb8c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 13 23:43:27.914159 containerd[1990]: time="2026-01-13T23:43:27.911055668Z" level=info msg="Container bd4085ba4fa37f2460ad7316f3db159a46a05469c28367a2e23e8d9a7d8a6c41: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:43:27.931029 containerd[1990]: time="2026-01-13T23:43:27.930851576Z" level=info msg="CreateContainer within sandbox \"ce428d2a25dc6ef362b0794ea1574c349f8eec1ed0790a8a64dacbcd0fb3bb8c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bd4085ba4fa37f2460ad7316f3db159a46a05469c28367a2e23e8d9a7d8a6c41\"" Jan 13 23:43:27.932327 containerd[1990]: time="2026-01-13T23:43:27.931816412Z" level=info msg="StartContainer for \"bd4085ba4fa37f2460ad7316f3db159a46a05469c28367a2e23e8d9a7d8a6c41\"" Jan 13 23:43:27.941187 containerd[1990]: time="2026-01-13T23:43:27.939867452Z" level=info msg="connecting to shim bd4085ba4fa37f2460ad7316f3db159a46a05469c28367a2e23e8d9a7d8a6c41" address="unix:///run/containerd/s/fe7f961e5d9dcfcc2abc4ea6a8bd26c9b3218d193656cb48e8c2463ffe90d3e7" protocol=ttrpc version=3 Jan 13 23:43:27.998486 systemd[1]: Started cri-containerd-bd4085ba4fa37f2460ad7316f3db159a46a05469c28367a2e23e8d9a7d8a6c41.scope - libcontainer container bd4085ba4fa37f2460ad7316f3db159a46a05469c28367a2e23e8d9a7d8a6c41. 
Jan 13 23:43:28.030000 audit: BPF prog-id=168 op=LOAD Jan 13 23:43:28.032000 audit: BPF prog-id=169 op=LOAD Jan 13 23:43:28.035272 kernel: audit: type=1334 audit(1768347808.030:565): prog-id=168 op=LOAD Jan 13 23:43:28.035404 kernel: audit: type=1334 audit(1768347808.032:566): prog-id=169 op=LOAD Jan 13 23:43:28.035450 kernel: audit: type=1300 audit(1768347808.032:566): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=3987 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:28.032000 audit[4113]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=3987 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:28.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264343038356261346661333766323436306164373331366633646231 Jan 13 23:43:28.047477 kernel: audit: type=1327 audit(1768347808.032:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264343038356261346661333766323436306164373331366633646231 Jan 13 23:43:28.032000 audit: BPF prog-id=169 op=UNLOAD Jan 13 23:43:28.032000 audit[4113]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3987 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:28.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264343038356261346661333766323436306164373331366633646231 Jan 13 23:43:28.032000 audit: BPF prog-id=170 op=LOAD Jan 13 23:43:28.032000 audit[4113]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3987 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:28.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264343038356261346661333766323436306164373331366633646231 Jan 13 23:43:28.040000 audit: BPF prog-id=171 op=LOAD Jan 13 23:43:28.040000 audit[4113]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3987 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:28.040000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264343038356261346661333766323436306164373331366633646231 Jan 13 23:43:28.040000 audit: BPF prog-id=171 op=UNLOAD Jan 13 23:43:28.040000 audit[4113]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3987 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:28.040000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264343038356261346661333766323436306164373331366633646231 Jan 13 23:43:28.040000 audit: BPF prog-id=170 op=UNLOAD Jan 13 23:43:28.040000 audit[4113]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3987 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:28.040000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264343038356261346661333766323436306164373331366633646231 Jan 13 23:43:28.040000 audit: BPF prog-id=172 op=LOAD Jan 13 23:43:28.040000 audit[4113]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=3987 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:28.040000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264343038356261346661333766323436306164373331366633646231 Jan 13 23:43:28.109441 containerd[1990]: time="2026-01-13T23:43:28.108030305Z" level=info msg="StartContainer for \"bd4085ba4fa37f2460ad7316f3db159a46a05469c28367a2e23e8d9a7d8a6c41\" returns successfully" Jan 13 23:43:28.847988 kubelet[3562]: E0113 23:43:28.847939 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.848586 kubelet[3562]: W0113 23:43:28.848000 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.848586 kubelet[3562]: E0113 23:43:28.848035 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:43:28.848586 kubelet[3562]: E0113 23:43:28.848500 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.848586 kubelet[3562]: W0113 23:43:28.848522 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.848797 kubelet[3562]: E0113 23:43:28.848618 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:28.849082 kubelet[3562]: E0113 23:43:28.849048 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.849198 kubelet[3562]: W0113 23:43:28.849078 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.849198 kubelet[3562]: E0113 23:43:28.849176 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:28.849598 kubelet[3562]: E0113 23:43:28.849569 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.849673 kubelet[3562]: W0113 23:43:28.849596 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.849673 kubelet[3562]: E0113 23:43:28.849640 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:28.850109 kubelet[3562]: E0113 23:43:28.850073 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.850232 kubelet[3562]: W0113 23:43:28.850105 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.850232 kubelet[3562]: E0113 23:43:28.850176 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:28.850702 kubelet[3562]: E0113 23:43:28.850657 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.850702 kubelet[3562]: W0113 23:43:28.850696 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.851029 kubelet[3562]: E0113 23:43:28.850727 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:43:28.851360 kubelet[3562]: E0113 23:43:28.851272 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.851360 kubelet[3562]: W0113 23:43:28.851309 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.851517 kubelet[3562]: E0113 23:43:28.851367 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:28.851914 kubelet[3562]: E0113 23:43:28.851832 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.851914 kubelet[3562]: W0113 23:43:28.851896 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.852079 kubelet[3562]: E0113 23:43:28.851926 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:28.852564 kubelet[3562]: E0113 23:43:28.852501 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.852564 kubelet[3562]: W0113 23:43:28.852544 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.852721 kubelet[3562]: E0113 23:43:28.852578 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:28.853014 kubelet[3562]: E0113 23:43:28.852969 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.853014 kubelet[3562]: W0113 23:43:28.853001 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.853189 kubelet[3562]: E0113 23:43:28.853030 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:28.853571 kubelet[3562]: E0113 23:43:28.853467 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.853571 kubelet[3562]: W0113 23:43:28.853556 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.853755 kubelet[3562]: E0113 23:43:28.853592 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:43:28.854929 kubelet[3562]: E0113 23:43:28.854031 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.854929 kubelet[3562]: W0113 23:43:28.854076 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.854929 kubelet[3562]: E0113 23:43:28.854108 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:28.855825 kubelet[3562]: E0113 23:43:28.855672 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.857365 kubelet[3562]: W0113 23:43:28.855802 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.857365 kubelet[3562]: E0113 23:43:28.856216 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:28.858008 kubelet[3562]: E0113 23:43:28.857948 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.858008 kubelet[3562]: W0113 23:43:28.857992 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.858219 kubelet[3562]: E0113 23:43:28.858026 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:28.860376 kubelet[3562]: E0113 23:43:28.860307 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.860376 kubelet[3562]: W0113 23:43:28.860356 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.860558 kubelet[3562]: E0113 23:43:28.860392 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:43:28.885675 kubelet[3562]: I0113 23:43:28.885579 3562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-698f88cd94-mvtxs" podStartSLOduration=2.872213107 podStartE2EDuration="4.885554625s" podCreationTimestamp="2026-01-13 23:43:24 +0000 UTC" firstStartedPulling="2026-01-13 23:43:25.848281554 +0000 UTC m=+37.541024539" lastFinishedPulling="2026-01-13 23:43:27.861622976 +0000 UTC m=+39.554366057" observedRunningTime="2026-01-13 23:43:28.860652081 +0000 UTC m=+40.553395078" watchObservedRunningTime="2026-01-13 23:43:28.885554625 +0000 UTC m=+40.578297622" Jan 13 23:43:28.908520 kubelet[3562]: E0113 23:43:28.908310 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.908691 kubelet[3562]: W0113 23:43:28.908510 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.908748 kubelet[3562]: E0113 23:43:28.908691 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:28.909772 kubelet[3562]: E0113 23:43:28.909719 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.909772 kubelet[3562]: W0113 23:43:28.909760 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.910376 kubelet[3562]: E0113 23:43:28.909804 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:28.910844 kubelet[3562]: E0113 23:43:28.910789 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.910844 kubelet[3562]: W0113 23:43:28.910834 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.910999 kubelet[3562]: E0113 23:43:28.910924 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:28.911909 kubelet[3562]: E0113 23:43:28.911856 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.911909 kubelet[3562]: W0113 23:43:28.911896 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.912535 kubelet[3562]: E0113 23:43:28.911945 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:43:28.912912 kubelet[3562]: E0113 23:43:28.912805 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.912912 kubelet[3562]: W0113 23:43:28.912833 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.913848 kubelet[3562]: E0113 23:43:28.913553 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:28.914450 kubelet[3562]: E0113 23:43:28.914392 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.914450 kubelet[3562]: W0113 23:43:28.914431 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.914713 kubelet[3562]: E0113 23:43:28.914664 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:28.915464 kubelet[3562]: E0113 23:43:28.915412 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.915464 kubelet[3562]: W0113 23:43:28.915452 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.915784 kubelet[3562]: E0113 23:43:28.915684 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:28.916119 kubelet[3562]: E0113 23:43:28.916061 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.916119 kubelet[3562]: W0113 23:43:28.916102 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.916336 kubelet[3562]: E0113 23:43:28.916245 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:28.916624 kubelet[3562]: E0113 23:43:28.916575 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.916624 kubelet[3562]: W0113 23:43:28.916611 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.916936 kubelet[3562]: E0113 23:43:28.916858 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:43:28.917365 kubelet[3562]: E0113 23:43:28.917325 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.917365 kubelet[3562]: W0113 23:43:28.917358 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.917538 kubelet[3562]: E0113 23:43:28.917407 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:28.917816 kubelet[3562]: E0113 23:43:28.917772 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.917816 kubelet[3562]: W0113 23:43:28.917807 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.917931 kubelet[3562]: E0113 23:43:28.917837 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:28.918893 kubelet[3562]: E0113 23:43:28.918833 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.918893 kubelet[3562]: W0113 23:43:28.918874 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.920242 kubelet[3562]: E0113 23:43:28.918907 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:28.920532 kubelet[3562]: E0113 23:43:28.920502 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.920656 kubelet[3562]: W0113 23:43:28.920631 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.921219 kubelet[3562]: E0113 23:43:28.920837 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:28.921748 kubelet[3562]: E0113 23:43:28.921718 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.921878 kubelet[3562]: W0113 23:43:28.921852 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.922108 kubelet[3562]: E0113 23:43:28.922082 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:43:28.922655 kubelet[3562]: E0113 23:43:28.922591 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.922655 kubelet[3562]: W0113 23:43:28.922621 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.922989 kubelet[3562]: E0113 23:43:28.922930 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:28.923463 kubelet[3562]: E0113 23:43:28.923425 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.923730 kubelet[3562]: W0113 23:43:28.923606 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.923730 kubelet[3562]: E0113 23:43:28.923667 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:28.924952 kubelet[3562]: E0113 23:43:28.924359 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.924952 kubelet[3562]: W0113 23:43:28.924411 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.924952 kubelet[3562]: E0113 23:43:28.924443 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:43:28.925653 kubelet[3562]: E0113 23:43:28.925624 3562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:43:28.925781 kubelet[3562]: W0113 23:43:28.925755 3562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:43:28.925912 kubelet[3562]: E0113 23:43:28.925888 3562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:43:28.934000 audit[4189]: NETFILTER_CFG table=filter:121 family=2 entries=21 op=nft_register_rule pid=4189 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:28.934000 audit[4189]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcb4c0900 a2=0 a3=1 items=0 ppid=3666 pid=4189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:28.934000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:28.939000 audit[4189]: NETFILTER_CFG table=nat:122 family=2 entries=19 op=nft_register_chain pid=4189 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:28.939000 audit[4189]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffcb4c0900 a2=0 a3=1 items=0 ppid=3666 pid=4189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:28.939000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:29.121739 containerd[1990]: time="2026-01-13T23:43:29.121577502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:43:29.124531 containerd[1990]: time="2026-01-13T23:43:29.124382322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 13 23:43:29.127040 containerd[1990]: time="2026-01-13T23:43:29.126969318Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:43:29.132642 containerd[1990]: time="2026-01-13T23:43:29.132558510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:43:29.134620 containerd[1990]: time="2026-01-13T23:43:29.134415954Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.270760334s" Jan 13 23:43:29.134620 containerd[1990]: time="2026-01-13T23:43:29.134475486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 13 23:43:29.140556 containerd[1990]: time="2026-01-13T23:43:29.139550874Z" level=info msg="CreateContainer within sandbox \"93642209e4428bf1be11addb51df0ac0e43d0368ec975c85a74d06cd4d98d96e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 23:43:29.160575 containerd[1990]: time="2026-01-13T23:43:29.160496286Z" level=info msg="Container da7b99b76b97120dd3288cf54b4379edc663cec9e8954cb850fba692295fd0e3: CDI 
devices from CRI Config.CDIDevices: []" Jan 13 23:43:29.186047 containerd[1990]: time="2026-01-13T23:43:29.185804971Z" level=info msg="CreateContainer within sandbox \"93642209e4428bf1be11addb51df0ac0e43d0368ec975c85a74d06cd4d98d96e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"da7b99b76b97120dd3288cf54b4379edc663cec9e8954cb850fba692295fd0e3\"" Jan 13 23:43:29.188074 containerd[1990]: time="2026-01-13T23:43:29.187155031Z" level=info msg="StartContainer for \"da7b99b76b97120dd3288cf54b4379edc663cec9e8954cb850fba692295fd0e3\"" Jan 13 23:43:29.193715 containerd[1990]: time="2026-01-13T23:43:29.193571791Z" level=info msg="connecting to shim da7b99b76b97120dd3288cf54b4379edc663cec9e8954cb850fba692295fd0e3" address="unix:///run/containerd/s/657bb21d44d31fa18af87af1acd163a11d9d50304bb10b0a10b5a59560a67cb1" protocol=ttrpc version=3 Jan 13 23:43:29.237519 systemd[1]: Started cri-containerd-da7b99b76b97120dd3288cf54b4379edc663cec9e8954cb850fba692295fd0e3.scope - libcontainer container da7b99b76b97120dd3288cf54b4379edc663cec9e8954cb850fba692295fd0e3. Jan 13 23:43:29.317000 audit: BPF prog-id=173 op=LOAD Jan 13 23:43:29.317000 audit[4194]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=4056 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:29.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461376239396237366239373132306464333238386366353462343337 Jan 13 23:43:29.317000 audit: BPF prog-id=174 op=LOAD Jan 13 23:43:29.317000 audit[4194]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=4056 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:29.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461376239396237366239373132306464333238386366353462343337 Jan 13 23:43:29.317000 audit: BPF prog-id=174 op=UNLOAD Jan 13 23:43:29.317000 audit[4194]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4056 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:29.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461376239396237366239373132306464333238386366353462343337 Jan 13 23:43:29.317000 audit: BPF prog-id=173 op=UNLOAD Jan 13 23:43:29.317000 audit[4194]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4056 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:29.317000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461376239396237366239373132306464333238386366353462343337 Jan 13 23:43:29.317000 audit: BPF prog-id=175 op=LOAD Jan 13 23:43:29.317000 audit[4194]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=4056 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:29.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461376239396237366239373132306464333238386366353462343337 Jan 13 23:43:29.361668 containerd[1990]: time="2026-01-13T23:43:29.361582723Z" level=info msg="StartContainer for \"da7b99b76b97120dd3288cf54b4379edc663cec9e8954cb850fba692295fd0e3\" returns successfully" Jan 13 23:43:29.409579 systemd[1]: cri-containerd-da7b99b76b97120dd3288cf54b4379edc663cec9e8954cb850fba692295fd0e3.scope: Deactivated successfully. Jan 13 23:43:29.413000 audit: BPF prog-id=175 op=UNLOAD Jan 13 23:43:29.415626 containerd[1990]: time="2026-01-13T23:43:29.415546916Z" level=info msg="received container exit event container_id:\"da7b99b76b97120dd3288cf54b4379edc663cec9e8954cb850fba692295fd0e3\" id:\"da7b99b76b97120dd3288cf54b4379edc663cec9e8954cb850fba692295fd0e3\" pid:4207 exited_at:{seconds:1768347809 nanos:413386616}" Jan 13 23:43:29.460492 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da7b99b76b97120dd3288cf54b4379edc663cec9e8954cb850fba692295fd0e3-rootfs.mount: Deactivated successfully. 
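The repeated driver-call.go and plugins.go errors above are kubelet's FlexVolume prober at work: it finds the vendor directory nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, tries to execute the driver binary uds with the init argument, gets no executable and therefore empty output, and then fails to parse "" as JSON ("unexpected end of JSON input"). The flexvol-driver container started just above (Calico's pod2daemon-flexvol image) is what eventually installs that binary. A rough Python sketch of the driver-call contract, only to show where the two messages come from (paths taken from the log; this is not kubelet's actual code):

```python
import json
import subprocess

DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

def call_driver(args):
    """Mimic kubelet's FlexVolume driver call: run the driver, parse its JSON reply."""
    try:
        out = subprocess.run([DRIVER, *args], capture_output=True, text=True).stdout
    except FileNotFoundError:
        # kubelet reports this as: executable file not found in $PATH, output: ""
        out = ""
    try:
        return json.loads(out)  # a healthy driver prints e.g. {"status": "Success", ...}
    except json.JSONDecodeError as err:
        # empty output is what produces "unexpected end of JSON input" in the log
        raise RuntimeError(f"failed to unmarshal output for command {args[0]!r}: {err}")

if __name__ == "__main__":
    try:
        print(call_driver(["init"]))
    except RuntimeError as err:
        print(err)
```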
Jan 13 23:43:29.596283 kubelet[3562]: E0113 23:43:29.596202 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lblk8" podUID="884e12a9-b4d3-4695-bc91-5cdf1a464d0b" Jan 13 23:43:29.853599 containerd[1990]: time="2026-01-13T23:43:29.853397662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 13 23:43:31.596705 kubelet[3562]: E0113 23:43:31.596611 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lblk8" podUID="884e12a9-b4d3-4695-bc91-5cdf1a464d0b" Jan 13 23:43:32.722228 containerd[1990]: time="2026-01-13T23:43:32.721245084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:43:32.724214 containerd[1990]: time="2026-01-13T23:43:32.724105944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Jan 13 23:43:32.725176 containerd[1990]: time="2026-01-13T23:43:32.725101464Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:43:32.730173 containerd[1990]: time="2026-01-13T23:43:32.730085652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:43:32.732536 containerd[1990]: time="2026-01-13T23:43:32.732490728Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.87899691s" Jan 13 23:43:32.733449 containerd[1990]: time="2026-01-13T23:43:32.733204476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 13 23:43:32.738001 containerd[1990]: time="2026-01-13T23:43:32.737647296Z" level=info msg="CreateContainer within sandbox \"93642209e4428bf1be11addb51df0ac0e43d0368ec975c85a74d06cd4d98d96e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 23:43:32.752587 containerd[1990]: time="2026-01-13T23:43:32.752516268Z" level=info msg="Container 3966a890ac6329acc3ae76a5b1d8dfb9267c98ba4812b38d137c1f39e6e7a297: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:43:32.769617 containerd[1990]: time="2026-01-13T23:43:32.769497024Z" level=info msg="CreateContainer within sandbox \"93642209e4428bf1be11addb51df0ac0e43d0368ec975c85a74d06cd4d98d96e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3966a890ac6329acc3ae76a5b1d8dfb9267c98ba4812b38d137c1f39e6e7a297\"" Jan 13 23:43:32.772318 containerd[1990]: time="2026-01-13T23:43:32.771103356Z" level=info msg="StartContainer for \"3966a890ac6329acc3ae76a5b1d8dfb9267c98ba4812b38d137c1f39e6e7a297\"" Jan 13 23:43:32.777680 
containerd[1990]: time="2026-01-13T23:43:32.777628188Z" level=info msg="connecting to shim 3966a890ac6329acc3ae76a5b1d8dfb9267c98ba4812b38d137c1f39e6e7a297" address="unix:///run/containerd/s/657bb21d44d31fa18af87af1acd163a11d9d50304bb10b0a10b5a59560a67cb1" protocol=ttrpc version=3 Jan 13 23:43:32.821518 systemd[1]: Started cri-containerd-3966a890ac6329acc3ae76a5b1d8dfb9267c98ba4812b38d137c1f39e6e7a297.scope - libcontainer container 3966a890ac6329acc3ae76a5b1d8dfb9267c98ba4812b38d137c1f39e6e7a297. Jan 13 23:43:32.913000 audit: BPF prog-id=176 op=LOAD Jan 13 23:43:32.915573 kernel: kauditd_printk_skb: 40 callbacks suppressed Jan 13 23:43:32.915685 kernel: audit: type=1334 audit(1768347812.913:581): prog-id=176 op=LOAD Jan 13 23:43:32.913000 audit[4256]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=4056 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:32.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339363661383930616336333239616363336165373661356231643864 Jan 13 23:43:32.928941 kernel: audit: type=1300 audit(1768347812.913:581): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=4056 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:32.929054 kernel: audit: type=1327 audit(1768347812.913:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339363661383930616336333239616363336165373661356231643864 Jan 13 23:43:32.914000 audit: BPF prog-id=177 op=LOAD Jan 13 23:43:32.930872 kernel: audit: type=1334 audit(1768347812.914:582): prog-id=177 op=LOAD Jan 13 23:43:32.931157 kernel: audit: type=1300 audit(1768347812.914:582): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=4056 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:32.914000 audit[4256]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=4056 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:32.914000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339363661383930616336333239616363336165373661356231643864 Jan 13 23:43:32.943190 kernel: audit: type=1327 audit(1768347812.914:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339363661383930616336333239616363336165373661356231643864 Jan 13 23:43:32.916000 
audit: BPF prog-id=177 op=UNLOAD Jan 13 23:43:32.945238 kernel: audit: type=1334 audit(1768347812.916:583): prog-id=177 op=UNLOAD Jan 13 23:43:32.916000 audit[4256]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4056 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:32.951276 kernel: audit: type=1300 audit(1768347812.916:583): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4056 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:32.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339363661383930616336333239616363336165373661356231643864 Jan 13 23:43:32.957704 kernel: audit: type=1327 audit(1768347812.916:583): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339363661383930616336333239616363336165373661356231643864 Jan 13 23:43:32.916000 audit: BPF prog-id=176 op=UNLOAD Jan 13 23:43:32.959643 kernel: audit: type=1334 audit(1768347812.916:584): prog-id=176 op=UNLOAD Jan 13 23:43:32.916000 audit[4256]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4056 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:32.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339363661383930616336333239616363336165373661356231643864 Jan 13 23:43:32.916000 audit: BPF prog-id=178 op=LOAD Jan 13 23:43:32.916000 audit[4256]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=4056 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:32.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339363661383930616336333239616363336165373661356231643864 Jan 13 23:43:32.998906 containerd[1990]: time="2026-01-13T23:43:32.998772338Z" level=info msg="StartContainer for \"3966a890ac6329acc3ae76a5b1d8dfb9267c98ba4812b38d137c1f39e6e7a297\" returns successfully" Jan 13 23:43:33.597007 kubelet[3562]: E0113 23:43:33.596485 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lblk8" podUID="884e12a9-b4d3-4695-bc91-5cdf1a464d0b" Jan 13 23:43:34.437150 containerd[1990]: 
time="2026-01-13T23:43:34.437053297Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 23:43:34.441775 systemd[1]: cri-containerd-3966a890ac6329acc3ae76a5b1d8dfb9267c98ba4812b38d137c1f39e6e7a297.scope: Deactivated successfully. Jan 13 23:43:34.442810 systemd[1]: cri-containerd-3966a890ac6329acc3ae76a5b1d8dfb9267c98ba4812b38d137c1f39e6e7a297.scope: Consumed 989ms CPU time, 185.4M memory peak, 165.9M written to disk. Jan 13 23:43:34.445000 audit: BPF prog-id=178 op=UNLOAD Jan 13 23:43:34.448760 containerd[1990]: time="2026-01-13T23:43:34.448658389Z" level=info msg="received container exit event container_id:\"3966a890ac6329acc3ae76a5b1d8dfb9267c98ba4812b38d137c1f39e6e7a297\" id:\"3966a890ac6329acc3ae76a5b1d8dfb9267c98ba4812b38d137c1f39e6e7a297\" pid:4269 exited_at:{seconds:1768347814 nanos:448195069}" Jan 13 23:43:34.471686 kubelet[3562]: I0113 23:43:34.471589 3562 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 13 23:43:34.523431 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3966a890ac6329acc3ae76a5b1d8dfb9267c98ba4812b38d137c1f39e6e7a297-rootfs.mount: Deactivated successfully. Jan 13 23:43:34.591611 systemd[1]: Created slice kubepods-burstable-podffeb255f_8b45_4b39_ab1a_757accecb002.slice - libcontainer container kubepods-burstable-podffeb255f_8b45_4b39_ab1a_757accecb002.slice. Jan 13 23:43:34.627531 systemd[1]: Created slice kubepods-besteffort-pod3317981a_15b4_41f8_a3cf_26fbd9c6fbf1.slice - libcontainer container kubepods-besteffort-pod3317981a_15b4_41f8_a3cf_26fbd9c6fbf1.slice. Jan 13 23:43:34.654532 systemd[1]: Created slice kubepods-burstable-pod866a6379_067c_4a18_817c_a6d7c19adba8.slice - libcontainer container kubepods-burstable-pod866a6379_067c_4a18_817c_a6d7c19adba8.slice. 
Jan 13 23:43:34.668751 kubelet[3562]: I0113 23:43:34.668679 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jntm8\" (UniqueName: \"kubernetes.io/projected/ffeb255f-8b45-4b39-ab1a-757accecb002-kube-api-access-jntm8\") pod \"coredns-668d6bf9bc-pkh58\" (UID: \"ffeb255f-8b45-4b39-ab1a-757accecb002\") " pod="kube-system/coredns-668d6bf9bc-pkh58" Jan 13 23:43:34.668751 kubelet[3562]: I0113 23:43:34.668757 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8956cacd-e75c-4e27-9bd0-1ec2f2c552be-whisker-backend-key-pair\") pod \"whisker-7668ccdc96-n889l\" (UID: \"8956cacd-e75c-4e27-9bd0-1ec2f2c552be\") " pod="calico-system/whisker-7668ccdc96-n889l" Jan 13 23:43:34.670066 kubelet[3562]: I0113 23:43:34.668809 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8956cacd-e75c-4e27-9bd0-1ec2f2c552be-whisker-ca-bundle\") pod \"whisker-7668ccdc96-n889l\" (UID: \"8956cacd-e75c-4e27-9bd0-1ec2f2c552be\") " pod="calico-system/whisker-7668ccdc96-n889l" Jan 13 23:43:34.670066 kubelet[3562]: I0113 23:43:34.668856 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hkjq\" (UniqueName: \"kubernetes.io/projected/866a6379-067c-4a18-817c-a6d7c19adba8-kube-api-access-6hkjq\") pod \"coredns-668d6bf9bc-rggqk\" (UID: \"866a6379-067c-4a18-817c-a6d7c19adba8\") " pod="kube-system/coredns-668d6bf9bc-rggqk" Jan 13 23:43:34.670066 kubelet[3562]: I0113 23:43:34.668894 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d529d459-4c8c-4f5e-b8a4-f53690574272-calico-apiserver-certs\") pod \"calico-apiserver-6bc5d5895-g9wvb\" (UID: \"d529d459-4c8c-4f5e-b8a4-f53690574272\") " pod="calico-apiserver/calico-apiserver-6bc5d5895-g9wvb" Jan 13 23:43:34.670066 kubelet[3562]: I0113 23:43:34.668936 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7796067b-5cab-42e9-af9d-320bb4208060-calico-apiserver-certs\") pod \"calico-apiserver-6bc5d5895-dvsmb\" (UID: \"7796067b-5cab-42e9-af9d-320bb4208060\") " pod="calico-apiserver/calico-apiserver-6bc5d5895-dvsmb" Jan 13 23:43:34.670066 kubelet[3562]: I0113 23:43:34.668985 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77d2v\" (UniqueName: \"kubernetes.io/projected/7796067b-5cab-42e9-af9d-320bb4208060-kube-api-access-77d2v\") pod \"calico-apiserver-6bc5d5895-dvsmb\" (UID: \"7796067b-5cab-42e9-af9d-320bb4208060\") " pod="calico-apiserver/calico-apiserver-6bc5d5895-dvsmb" Jan 13 23:43:34.672982 kubelet[3562]: I0113 23:43:34.669024 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffeb255f-8b45-4b39-ab1a-757accecb002-config-volume\") pod \"coredns-668d6bf9bc-pkh58\" (UID: \"ffeb255f-8b45-4b39-ab1a-757accecb002\") " pod="kube-system/coredns-668d6bf9bc-pkh58" Jan 13 23:43:34.672982 kubelet[3562]: I0113 23:43:34.669069 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh8d6\" (UniqueName: 
\"kubernetes.io/projected/8956cacd-e75c-4e27-9bd0-1ec2f2c552be-kube-api-access-sh8d6\") pod \"whisker-7668ccdc96-n889l\" (UID: \"8956cacd-e75c-4e27-9bd0-1ec2f2c552be\") " pod="calico-system/whisker-7668ccdc96-n889l" Jan 13 23:43:34.672982 kubelet[3562]: I0113 23:43:34.670694 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3317981a-15b4-41f8-a3cf-26fbd9c6fbf1-tigera-ca-bundle\") pod \"calico-kube-controllers-7b8d4f74f9-kqpzz\" (UID: \"3317981a-15b4-41f8-a3cf-26fbd9c6fbf1\") " pod="calico-system/calico-kube-controllers-7b8d4f74f9-kqpzz" Jan 13 23:43:34.672982 kubelet[3562]: I0113 23:43:34.670796 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csplk\" (UniqueName: \"kubernetes.io/projected/3317981a-15b4-41f8-a3cf-26fbd9c6fbf1-kube-api-access-csplk\") pod \"calico-kube-controllers-7b8d4f74f9-kqpzz\" (UID: \"3317981a-15b4-41f8-a3cf-26fbd9c6fbf1\") " pod="calico-system/calico-kube-controllers-7b8d4f74f9-kqpzz" Jan 13 23:43:34.672982 kubelet[3562]: I0113 23:43:34.670843 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dgwh\" (UniqueName: \"kubernetes.io/projected/d529d459-4c8c-4f5e-b8a4-f53690574272-kube-api-access-2dgwh\") pod \"calico-apiserver-6bc5d5895-g9wvb\" (UID: \"d529d459-4c8c-4f5e-b8a4-f53690574272\") " pod="calico-apiserver/calico-apiserver-6bc5d5895-g9wvb" Jan 13 23:43:34.678545 kubelet[3562]: I0113 23:43:34.670882 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/866a6379-067c-4a18-817c-a6d7c19adba8-config-volume\") pod \"coredns-668d6bf9bc-rggqk\" (UID: \"866a6379-067c-4a18-817c-a6d7c19adba8\") " pod="kube-system/coredns-668d6bf9bc-rggqk" Jan 13 23:43:34.676790 systemd[1]: Created slice kubepods-besteffort-podd529d459_4c8c_4f5e_b8a4_f53690574272.slice - libcontainer container kubepods-besteffort-podd529d459_4c8c_4f5e_b8a4_f53690574272.slice. Jan 13 23:43:34.703264 systemd[1]: Created slice kubepods-besteffort-pod7796067b_5cab_42e9_af9d_320bb4208060.slice - libcontainer container kubepods-besteffort-pod7796067b_5cab_42e9_af9d_320bb4208060.slice. Jan 13 23:43:34.735251 systemd[1]: Created slice kubepods-besteffort-pod8956cacd_e75c_4e27_9bd0_1ec2f2c552be.slice - libcontainer container kubepods-besteffort-pod8956cacd_e75c_4e27_9bd0_1ec2f2c552be.slice. Jan 13 23:43:34.763277 systemd[1]: Created slice kubepods-besteffort-pod86622233_a85f_41fd_b458_2112644e82b9.slice - libcontainer container kubepods-besteffort-pod86622233_a85f_41fd_b458_2112644e82b9.slice. 
Jan 13 23:43:34.772579 kubelet[3562]: I0113 23:43:34.772517 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/db70aac6-82d4-4ef8-98ae-1ad4091dd76e-calico-apiserver-certs\") pod \"calico-apiserver-7c9dddc9f7-sd8kc\" (UID: \"db70aac6-82d4-4ef8-98ae-1ad4091dd76e\") " pod="calico-apiserver/calico-apiserver-7c9dddc9f7-sd8kc" Jan 13 23:43:34.772947 kubelet[3562]: I0113 23:43:34.772894 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxwhx\" (UniqueName: \"kubernetes.io/projected/db70aac6-82d4-4ef8-98ae-1ad4091dd76e-kube-api-access-dxwhx\") pod \"calico-apiserver-7c9dddc9f7-sd8kc\" (UID: \"db70aac6-82d4-4ef8-98ae-1ad4091dd76e\") " pod="calico-apiserver/calico-apiserver-7c9dddc9f7-sd8kc" Jan 13 23:43:34.773684 kubelet[3562]: I0113 23:43:34.773604 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmwts\" (UniqueName: \"kubernetes.io/projected/86622233-a85f-41fd-b458-2112644e82b9-kube-api-access-hmwts\") pod \"goldmane-666569f655-zwrvg\" (UID: \"86622233-a85f-41fd-b458-2112644e82b9\") " pod="calico-system/goldmane-666569f655-zwrvg" Jan 13 23:43:34.774045 kubelet[3562]: I0113 23:43:34.773988 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86622233-a85f-41fd-b458-2112644e82b9-goldmane-ca-bundle\") pod \"goldmane-666569f655-zwrvg\" (UID: \"86622233-a85f-41fd-b458-2112644e82b9\") " pod="calico-system/goldmane-666569f655-zwrvg" Jan 13 23:43:34.774417 kubelet[3562]: I0113 23:43:34.774366 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86622233-a85f-41fd-b458-2112644e82b9-config\") pod \"goldmane-666569f655-zwrvg\" (UID: \"86622233-a85f-41fd-b458-2112644e82b9\") " pod="calico-system/goldmane-666569f655-zwrvg" Jan 13 23:43:34.776898 kubelet[3562]: I0113 23:43:34.776827 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/86622233-a85f-41fd-b458-2112644e82b9-goldmane-key-pair\") pod \"goldmane-666569f655-zwrvg\" (UID: \"86622233-a85f-41fd-b458-2112644e82b9\") " pod="calico-system/goldmane-666569f655-zwrvg" Jan 13 23:43:34.850563 systemd[1]: Created slice kubepods-besteffort-poddb70aac6_82d4_4ef8_98ae_1ad4091dd76e.slice - libcontainer container kubepods-besteffort-poddb70aac6_82d4_4ef8_98ae_1ad4091dd76e.slice. 
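The reconciler_common.go lines above enumerate every volume kubelet is about to verify as attached for the newly runnable pods, keyed by a UniqueName of the form <plugin>/<pod-UID>-<volume-name> (projected service-account tokens, Secrets, and ConfigMaps for the coredns, whisker, calico-apiserver, goldmane and calico-kube-controllers pods). A small parser for that pattern, with example names copied from the log and the assumption that pod UIDs are standard 36-character UUIDs, as they are here:

```python
# Split a kubelet volume UniqueName such as
#   kubernetes.io/projected/ffeb255f-8b45-4b39-ab1a-757accecb002-kube-api-access-jntm8
# into (plugin, pod_uid, volume_name). Illustrative only.
def parse_unique_name(unique_name: str):
    plugin, _, rest = unique_name.rpartition("/")   # plugin prefix vs. "<uid>-<volume>"
    pod_uid, volume_name = rest[:36], rest[37:]     # 36-char UUID, then skip the joining '-'
    return plugin, pod_uid, volume_name

examples = [
    "kubernetes.io/projected/ffeb255f-8b45-4b39-ab1a-757accecb002-kube-api-access-jntm8",
    "kubernetes.io/secret/86622233-a85f-41fd-b458-2112644e82b9-goldmane-key-pair",
    "kubernetes.io/configmap/3317981a-15b4-41f8-a3cf-26fbd9c6fbf1-tigera-ca-bundle",
]

for name in examples:
    plugin, pod_uid, volume = parse_unique_name(name)
    print(f"{plugin:25} pod={pod_uid} volume={volume}")
```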
Jan 13 23:43:34.941067 containerd[1990]: time="2026-01-13T23:43:34.940977051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b8d4f74f9-kqpzz,Uid:3317981a-15b4-41f8-a3cf-26fbd9c6fbf1,Namespace:calico-system,Attempt:0,}" Jan 13 23:43:34.983460 containerd[1990]: time="2026-01-13T23:43:34.982554735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rggqk,Uid:866a6379-067c-4a18-817c-a6d7c19adba8,Namespace:kube-system,Attempt:0,}" Jan 13 23:43:34.992106 containerd[1990]: time="2026-01-13T23:43:34.991998675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bc5d5895-g9wvb,Uid:d529d459-4c8c-4f5e-b8a4-f53690574272,Namespace:calico-apiserver,Attempt:0,}" Jan 13 23:43:35.019750 containerd[1990]: time="2026-01-13T23:43:35.019689816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bc5d5895-dvsmb,Uid:7796067b-5cab-42e9-af9d-320bb4208060,Namespace:calico-apiserver,Attempt:0,}" Jan 13 23:43:35.049194 containerd[1990]: time="2026-01-13T23:43:35.049063116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7668ccdc96-n889l,Uid:8956cacd-e75c-4e27-9bd0-1ec2f2c552be,Namespace:calico-system,Attempt:0,}" Jan 13 23:43:35.095691 containerd[1990]: time="2026-01-13T23:43:35.095618316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zwrvg,Uid:86622233-a85f-41fd-b458-2112644e82b9,Namespace:calico-system,Attempt:0,}" Jan 13 23:43:35.192154 containerd[1990]: time="2026-01-13T23:43:35.191753352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9dddc9f7-sd8kc,Uid:db70aac6-82d4-4ef8-98ae-1ad4091dd76e,Namespace:calico-apiserver,Attempt:0,}" Jan 13 23:43:35.218657 containerd[1990]: time="2026-01-13T23:43:35.218582881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pkh58,Uid:ffeb255f-8b45-4b39-ab1a-757accecb002,Namespace:kube-system,Attempt:0,}" Jan 13 23:43:35.488387 containerd[1990]: time="2026-01-13T23:43:35.488004158Z" level=error msg="Failed to destroy network for sandbox \"f858bfae26880b7aee5b158bf8d541e8a3178bd2987345fa0b432d3ccd12518c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:35.501458 containerd[1990]: time="2026-01-13T23:43:35.501262994Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b8d4f74f9-kqpzz,Uid:3317981a-15b4-41f8-a3cf-26fbd9c6fbf1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f858bfae26880b7aee5b158bf8d541e8a3178bd2987345fa0b432d3ccd12518c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:35.501764 kubelet[3562]: E0113 23:43:35.501670 3562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f858bfae26880b7aee5b158bf8d541e8a3178bd2987345fa0b432d3ccd12518c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:35.501764 kubelet[3562]: E0113 23:43:35.501765 3562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"f858bfae26880b7aee5b158bf8d541e8a3178bd2987345fa0b432d3ccd12518c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b8d4f74f9-kqpzz" Jan 13 23:43:35.501764 kubelet[3562]: E0113 23:43:35.501800 3562 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f858bfae26880b7aee5b158bf8d541e8a3178bd2987345fa0b432d3ccd12518c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b8d4f74f9-kqpzz" Jan 13 23:43:35.502533 kubelet[3562]: E0113 23:43:35.501883 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b8d4f74f9-kqpzz_calico-system(3317981a-15b4-41f8-a3cf-26fbd9c6fbf1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b8d4f74f9-kqpzz_calico-system(3317981a-15b4-41f8-a3cf-26fbd9c6fbf1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f858bfae26880b7aee5b158bf8d541e8a3178bd2987345fa0b432d3ccd12518c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b8d4f74f9-kqpzz" podUID="3317981a-15b4-41f8-a3cf-26fbd9c6fbf1" Jan 13 23:43:35.612018 systemd[1]: Created slice kubepods-besteffort-pod884e12a9_b4d3_4695_bc91_5cdf1a464d0b.slice - libcontainer container kubepods-besteffort-pod884e12a9_b4d3_4695_bc91_5cdf1a464d0b.slice. Jan 13 23:43:35.624005 containerd[1990]: time="2026-01-13T23:43:35.623952591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lblk8,Uid:884e12a9-b4d3-4695-bc91-5cdf1a464d0b,Namespace:calico-system,Attempt:0,}" Jan 13 23:43:35.786999 containerd[1990]: time="2026-01-13T23:43:35.784777611Z" level=error msg="Failed to destroy network for sandbox \"fc8766403211d1ae83ac0a8dc5c71aeeccef8d75a4a6ff92f009fb42e858d867\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:35.791282 systemd[1]: run-netns-cni\x2dd9b50ecc\x2d82e9\x2d84dc\x2dec22\x2dcfbd6c90067e.mount: Deactivated successfully. 
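Every RunPodSandbox failure from here on has the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico-node writes once it is running, and the file does not exist yet, so both the add and the delete paths fail and kubelet keeps retrying the pods. A minimal sketch of that precondition, reusing the wording from the log (the path is from the log; the helper itself is illustrative, not the plugin's code):

```python
from pathlib import Path

NODENAME_FILE = Path("/var/lib/calico/nodename")

def calico_nodename() -> str:
    """Read the node name that calico-node records for the CNI plugin to use."""
    try:
        return NODENAME_FILE.read_text().strip()
    except FileNotFoundError:
        # Same condition as in the log: until calico-node has started and mounted
        # /var/lib/calico/, the plugin cannot tell which node it is running on.
        raise RuntimeError(
            f"stat {NODENAME_FILE}: no such file or directory: "
            "check that the calico/node container is running and has mounted /var/lib/calico/"
        )

if __name__ == "__main__":
    try:
        print("node:", calico_nodename())
    except RuntimeError as err:
        print(err)
```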
Jan 13 23:43:35.794470 containerd[1990]: time="2026-01-13T23:43:35.794055603Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bc5d5895-dvsmb,Uid:7796067b-5cab-42e9-af9d-320bb4208060,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc8766403211d1ae83ac0a8dc5c71aeeccef8d75a4a6ff92f009fb42e858d867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:35.796629 kubelet[3562]: E0113 23:43:35.795851 3562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc8766403211d1ae83ac0a8dc5c71aeeccef8d75a4a6ff92f009fb42e858d867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:35.796629 kubelet[3562]: E0113 23:43:35.795962 3562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc8766403211d1ae83ac0a8dc5c71aeeccef8d75a4a6ff92f009fb42e858d867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bc5d5895-dvsmb" Jan 13 23:43:35.796629 kubelet[3562]: E0113 23:43:35.796000 3562 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc8766403211d1ae83ac0a8dc5c71aeeccef8d75a4a6ff92f009fb42e858d867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bc5d5895-dvsmb" Jan 13 23:43:35.798499 kubelet[3562]: E0113 23:43:35.796417 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bc5d5895-dvsmb_calico-apiserver(7796067b-5cab-42e9-af9d-320bb4208060)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bc5d5895-dvsmb_calico-apiserver(7796067b-5cab-42e9-af9d-320bb4208060)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc8766403211d1ae83ac0a8dc5c71aeeccef8d75a4a6ff92f009fb42e858d867\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-dvsmb" podUID="7796067b-5cab-42e9-af9d-320bb4208060" Jan 13 23:43:35.863344 containerd[1990]: time="2026-01-13T23:43:35.863203660Z" level=error msg="Failed to destroy network for sandbox \"1dd72fce44c38419343e6f8956abf35b5ec55d45bb9b0803be92094a5fb4869a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:35.869781 systemd[1]: run-netns-cni\x2de985283e\x2d5d60\x2d2040\x2da0a5\x2db32ee036d56c.mount: Deactivated successfully. 
Jan 13 23:43:35.878333 containerd[1990]: time="2026-01-13T23:43:35.878242336Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bc5d5895-g9wvb,Uid:d529d459-4c8c-4f5e-b8a4-f53690574272,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dd72fce44c38419343e6f8956abf35b5ec55d45bb9b0803be92094a5fb4869a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:35.878607 kubelet[3562]: E0113 23:43:35.878547 3562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dd72fce44c38419343e6f8956abf35b5ec55d45bb9b0803be92094a5fb4869a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:35.878607 kubelet[3562]: E0113 23:43:35.878619 3562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dd72fce44c38419343e6f8956abf35b5ec55d45bb9b0803be92094a5fb4869a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bc5d5895-g9wvb" Jan 13 23:43:35.878607 kubelet[3562]: E0113 23:43:35.878655 3562 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dd72fce44c38419343e6f8956abf35b5ec55d45bb9b0803be92094a5fb4869a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bc5d5895-g9wvb" Jan 13 23:43:35.881464 kubelet[3562]: E0113 23:43:35.878722 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bc5d5895-g9wvb_calico-apiserver(d529d459-4c8c-4f5e-b8a4-f53690574272)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bc5d5895-g9wvb_calico-apiserver(d529d459-4c8c-4f5e-b8a4-f53690574272)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1dd72fce44c38419343e6f8956abf35b5ec55d45bb9b0803be92094a5fb4869a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-g9wvb" podUID="d529d459-4c8c-4f5e-b8a4-f53690574272" Jan 13 23:43:35.900182 containerd[1990]: time="2026-01-13T23:43:35.900077704Z" level=error msg="Failed to destroy network for sandbox \"69717666151d2ac069219caf0246d7a8170f954f3c782dc25a10745db309db91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:35.907447 systemd[1]: run-netns-cni\x2dffe16d81\x2d0a20\x2d64fb\x2dc05e\x2dfe7a503e8caf.mount: Deactivated successfully. 
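Stepping back to the audit records interleaved through this stretch of the log (NETFILTER_CFG, SYSCALL, BPF, PROCTITLE): the PROCTITLE field carries the audited process's command line as hex-encoded, NUL-separated bytes, so it can be read back mechanically. A short decoder, applied to the iptables-restore records above:

```python
def decode_proctitle(hex_proctitle: str) -> str:
    """Turn an audit PROCTITLE hex string back into the original argv (NUL-separated)."""
    raw = bytes.fromhex(hex_proctitle)
    return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
))
# -> iptables-restore -w 5 -W 100000 --noflush --counters
```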
Jan 13 23:43:35.911169 containerd[1990]: time="2026-01-13T23:43:35.911060584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rggqk,Uid:866a6379-067c-4a18-817c-a6d7c19adba8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"69717666151d2ac069219caf0246d7a8170f954f3c782dc25a10745db309db91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:35.912785 kubelet[3562]: E0113 23:43:35.912701 3562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69717666151d2ac069219caf0246d7a8170f954f3c782dc25a10745db309db91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:35.912785 kubelet[3562]: E0113 23:43:35.912781 3562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69717666151d2ac069219caf0246d7a8170f954f3c782dc25a10745db309db91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rggqk" Jan 13 23:43:35.913591 kubelet[3562]: E0113 23:43:35.912813 3562 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69717666151d2ac069219caf0246d7a8170f954f3c782dc25a10745db309db91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rggqk" Jan 13 23:43:35.913591 kubelet[3562]: E0113 23:43:35.912875 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rggqk_kube-system(866a6379-067c-4a18-817c-a6d7c19adba8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rggqk_kube-system(866a6379-067c-4a18-817c-a6d7c19adba8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69717666151d2ac069219caf0246d7a8170f954f3c782dc25a10745db309db91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rggqk" podUID="866a6379-067c-4a18-817c-a6d7c19adba8" Jan 13 23:43:35.923603 containerd[1990]: time="2026-01-13T23:43:35.923526088Z" level=error msg="Failed to destroy network for sandbox \"687011b6cd350ee653e4b0f0c360835c2e772e99135cc3c8a69b00ac3fc7a2c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:35.930816 containerd[1990]: time="2026-01-13T23:43:35.930722680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zwrvg,Uid:86622233-a85f-41fd-b458-2112644e82b9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"687011b6cd350ee653e4b0f0c360835c2e772e99135cc3c8a69b00ac3fc7a2c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:35.933181 kubelet[3562]: E0113 23:43:35.932992 3562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"687011b6cd350ee653e4b0f0c360835c2e772e99135cc3c8a69b00ac3fc7a2c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:35.933351 kubelet[3562]: E0113 23:43:35.933261 3562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"687011b6cd350ee653e4b0f0c360835c2e772e99135cc3c8a69b00ac3fc7a2c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-zwrvg" Jan 13 23:43:35.933446 kubelet[3562]: E0113 23:43:35.933343 3562 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"687011b6cd350ee653e4b0f0c360835c2e772e99135cc3c8a69b00ac3fc7a2c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-zwrvg" Jan 13 23:43:35.935003 kubelet[3562]: E0113 23:43:35.934209 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-zwrvg_calico-system(86622233-a85f-41fd-b458-2112644e82b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-zwrvg_calico-system(86622233-a85f-41fd-b458-2112644e82b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"687011b6cd350ee653e4b0f0c360835c2e772e99135cc3c8a69b00ac3fc7a2c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-zwrvg" podUID="86622233-a85f-41fd-b458-2112644e82b9" Jan 13 23:43:35.950849 containerd[1990]: time="2026-01-13T23:43:35.950728924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 13 23:43:35.980407 containerd[1990]: time="2026-01-13T23:43:35.980340856Z" level=error msg="Failed to destroy network for sandbox \"abaf0680d02bfd57a5bb8619ac3e84778bdd6062c34c711509fefb7bf7a3cc3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:35.984407 containerd[1990]: time="2026-01-13T23:43:35.984279196Z" level=error msg="Failed to destroy network for sandbox \"1b2759a608b0a4c4a78d3ad0ff10a3e26f05baa1cba36a39369390c1f6c5634a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:36.001155 containerd[1990]: time="2026-01-13T23:43:36.000882144Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-pkh58,Uid:ffeb255f-8b45-4b39-ab1a-757accecb002,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b2759a608b0a4c4a78d3ad0ff10a3e26f05baa1cba36a39369390c1f6c5634a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:36.002557 kubelet[3562]: E0113 23:43:36.002454 3562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b2759a608b0a4c4a78d3ad0ff10a3e26f05baa1cba36a39369390c1f6c5634a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:36.002557 kubelet[3562]: E0113 23:43:36.002551 3562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b2759a608b0a4c4a78d3ad0ff10a3e26f05baa1cba36a39369390c1f6c5634a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pkh58" Jan 13 23:43:36.002789 kubelet[3562]: E0113 23:43:36.002589 3562 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b2759a608b0a4c4a78d3ad0ff10a3e26f05baa1cba36a39369390c1f6c5634a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pkh58" Jan 13 23:43:36.002789 kubelet[3562]: E0113 23:43:36.002648 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pkh58_kube-system(ffeb255f-8b45-4b39-ab1a-757accecb002)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pkh58_kube-system(ffeb255f-8b45-4b39-ab1a-757accecb002)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b2759a608b0a4c4a78d3ad0ff10a3e26f05baa1cba36a39369390c1f6c5634a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pkh58" podUID="ffeb255f-8b45-4b39-ab1a-757accecb002" Jan 13 23:43:36.004684 containerd[1990]: time="2026-01-13T23:43:36.003339216Z" level=error msg="Failed to destroy network for sandbox \"e23fd7fe755dfe7d8cc16c8f844b1096028cf1c15e5821a736f3c0d94f18a6f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:36.008094 containerd[1990]: time="2026-01-13T23:43:36.008010564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9dddc9f7-sd8kc,Uid:db70aac6-82d4-4ef8-98ae-1ad4091dd76e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"abaf0680d02bfd57a5bb8619ac3e84778bdd6062c34c711509fefb7bf7a3cc3f\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:36.010077 kubelet[3562]: E0113 23:43:36.009941 3562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abaf0680d02bfd57a5bb8619ac3e84778bdd6062c34c711509fefb7bf7a3cc3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:36.010864 kubelet[3562]: E0113 23:43:36.010266 3562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abaf0680d02bfd57a5bb8619ac3e84778bdd6062c34c711509fefb7bf7a3cc3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c9dddc9f7-sd8kc" Jan 13 23:43:36.010864 kubelet[3562]: E0113 23:43:36.010313 3562 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abaf0680d02bfd57a5bb8619ac3e84778bdd6062c34c711509fefb7bf7a3cc3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c9dddc9f7-sd8kc" Jan 13 23:43:36.010864 kubelet[3562]: E0113 23:43:36.010542 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c9dddc9f7-sd8kc_calico-apiserver(db70aac6-82d4-4ef8-98ae-1ad4091dd76e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c9dddc9f7-sd8kc_calico-apiserver(db70aac6-82d4-4ef8-98ae-1ad4091dd76e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"abaf0680d02bfd57a5bb8619ac3e84778bdd6062c34c711509fefb7bf7a3cc3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c9dddc9f7-sd8kc" podUID="db70aac6-82d4-4ef8-98ae-1ad4091dd76e" Jan 13 23:43:36.016216 containerd[1990]: time="2026-01-13T23:43:36.016102021Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7668ccdc96-n889l,Uid:8956cacd-e75c-4e27-9bd0-1ec2f2c552be,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e23fd7fe755dfe7d8cc16c8f844b1096028cf1c15e5821a736f3c0d94f18a6f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:36.016533 kubelet[3562]: E0113 23:43:36.016444 3562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e23fd7fe755dfe7d8cc16c8f844b1096028cf1c15e5821a736f3c0d94f18a6f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:36.016533 kubelet[3562]: E0113 23:43:36.016518 3562 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e23fd7fe755dfe7d8cc16c8f844b1096028cf1c15e5821a736f3c0d94f18a6f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7668ccdc96-n889l" Jan 13 23:43:36.016679 kubelet[3562]: E0113 23:43:36.016556 3562 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e23fd7fe755dfe7d8cc16c8f844b1096028cf1c15e5821a736f3c0d94f18a6f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7668ccdc96-n889l" Jan 13 23:43:36.016679 kubelet[3562]: E0113 23:43:36.016624 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7668ccdc96-n889l_calico-system(8956cacd-e75c-4e27-9bd0-1ec2f2c552be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7668ccdc96-n889l_calico-system(8956cacd-e75c-4e27-9bd0-1ec2f2c552be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e23fd7fe755dfe7d8cc16c8f844b1096028cf1c15e5821a736f3c0d94f18a6f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7668ccdc96-n889l" podUID="8956cacd-e75c-4e27-9bd0-1ec2f2c552be" Jan 13 23:43:36.057866 containerd[1990]: time="2026-01-13T23:43:36.057701677Z" level=error msg="Failed to destroy network for sandbox \"98d87053d91c5d9d999fb8759b07278dc467fdd49b8aae5d18d1d68e78f9918a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:36.063023 containerd[1990]: time="2026-01-13T23:43:36.062943253Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lblk8,Uid:884e12a9-b4d3-4695-bc91-5cdf1a464d0b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"98d87053d91c5d9d999fb8759b07278dc467fdd49b8aae5d18d1d68e78f9918a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:36.063463 kubelet[3562]: E0113 23:43:36.063409 3562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98d87053d91c5d9d999fb8759b07278dc467fdd49b8aae5d18d1d68e78f9918a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:43:36.063583 kubelet[3562]: E0113 23:43:36.063493 3562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98d87053d91c5d9d999fb8759b07278dc467fdd49b8aae5d18d1d68e78f9918a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-lblk8" Jan 13 23:43:36.063583 kubelet[3562]: E0113 23:43:36.063533 3562 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98d87053d91c5d9d999fb8759b07278dc467fdd49b8aae5d18d1d68e78f9918a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lblk8" Jan 13 23:43:36.063714 kubelet[3562]: E0113 23:43:36.063595 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lblk8_calico-system(884e12a9-b4d3-4695-bc91-5cdf1a464d0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lblk8_calico-system(884e12a9-b4d3-4695-bc91-5cdf1a464d0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98d87053d91c5d9d999fb8759b07278dc467fdd49b8aae5d18d1d68e78f9918a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lblk8" podUID="884e12a9-b4d3-4695-bc91-5cdf1a464d0b" Jan 13 23:43:36.519915 systemd[1]: run-netns-cni\x2d2922ae87\x2d308e\x2d9752\x2db0b0\x2d897046c0e244.mount: Deactivated successfully. Jan 13 23:43:36.520112 systemd[1]: run-netns-cni\x2d7842fc04\x2df5cb\x2d79bd\x2d686a\x2d2697f4c6fdb7.mount: Deactivated successfully. Jan 13 23:43:36.520280 systemd[1]: run-netns-cni\x2dcd0e48e0\x2d8fd9\x2d5c43\x2d962b\x2d016a61358a69.mount: Deactivated successfully. Jan 13 23:43:36.520424 systemd[1]: run-netns-cni\x2d6d378267\x2df7ba\x2d57de\x2df253\x2dcd7abffbfff5.mount: Deactivated successfully. Jan 13 23:43:36.520560 systemd[1]: run-netns-cni\x2dc149b459\x2d7cc5\x2d1ab9\x2deaa8\x2dc6bc650d97a1.mount: Deactivated successfully. Jan 13 23:43:42.093691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3301881616.mount: Deactivated successfully. 
Jan 13 23:43:42.178153 containerd[1990]: time="2026-01-13T23:43:42.178055707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:43:42.179954 containerd[1990]: time="2026-01-13T23:43:42.179630083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 13 23:43:42.181297 containerd[1990]: time="2026-01-13T23:43:42.181235791Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:43:42.185109 containerd[1990]: time="2026-01-13T23:43:42.185056543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:43:42.187198 containerd[1990]: time="2026-01-13T23:43:42.186676975Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.234567259s" Jan 13 23:43:42.187198 containerd[1990]: time="2026-01-13T23:43:42.186740095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 13 23:43:42.214548 containerd[1990]: time="2026-01-13T23:43:42.214497871Z" level=info msg="CreateContainer within sandbox \"93642209e4428bf1be11addb51df0ac0e43d0368ec975c85a74d06cd4d98d96e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 23:43:42.242494 containerd[1990]: time="2026-01-13T23:43:42.242412775Z" level=info msg="Container 0c2d736e26e1be8e75db80e196036d2fec44e27bda20bde87f82b43bca45af3a: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:43:42.269328 containerd[1990]: time="2026-01-13T23:43:42.269220284Z" level=info msg="CreateContainer within sandbox \"93642209e4428bf1be11addb51df0ac0e43d0368ec975c85a74d06cd4d98d96e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0c2d736e26e1be8e75db80e196036d2fec44e27bda20bde87f82b43bca45af3a\"" Jan 13 23:43:42.272437 containerd[1990]: time="2026-01-13T23:43:42.271659860Z" level=info msg="StartContainer for \"0c2d736e26e1be8e75db80e196036d2fec44e27bda20bde87f82b43bca45af3a\"" Jan 13 23:43:42.277072 containerd[1990]: time="2026-01-13T23:43:42.276985952Z" level=info msg="connecting to shim 0c2d736e26e1be8e75db80e196036d2fec44e27bda20bde87f82b43bca45af3a" address="unix:///run/containerd/s/657bb21d44d31fa18af87af1acd163a11d9d50304bb10b0a10b5a59560a67cb1" protocol=ttrpc version=3 Jan 13 23:43:42.320502 systemd[1]: Started cri-containerd-0c2d736e26e1be8e75db80e196036d2fec44e27bda20bde87f82b43bca45af3a.scope - libcontainer container 0c2d736e26e1be8e75db80e196036d2fec44e27bda20bde87f82b43bca45af3a. 
Jan 13 23:43:42.439000 audit: BPF prog-id=179 op=LOAD Jan 13 23:43:42.442379 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 13 23:43:42.442489 kernel: audit: type=1334 audit(1768347822.439:587): prog-id=179 op=LOAD Jan 13 23:43:42.439000 audit[4551]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=4056 pid=4551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:42.450180 kernel: audit: type=1300 audit(1768347822.439:587): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=4056 pid=4551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:42.450292 kernel: audit: type=1327 audit(1768347822.439:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063326437333665323665316265386537356462383065313936303336 Jan 13 23:43:42.439000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063326437333665323665316265386537356462383065313936303336 Jan 13 23:43:42.457712 kernel: audit: type=1334 audit(1768347822.440:588): prog-id=180 op=LOAD Jan 13 23:43:42.457792 kernel: audit: type=1300 audit(1768347822.440:588): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=4056 pid=4551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:42.440000 audit: BPF prog-id=180 op=LOAD Jan 13 23:43:42.440000 audit[4551]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=4056 pid=4551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:42.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063326437333665323665316265386537356462383065313936303336 Jan 13 23:43:42.470385 kernel: audit: type=1327 audit(1768347822.440:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063326437333665323665316265386537356462383065313936303336 Jan 13 23:43:42.443000 audit: BPF prog-id=180 op=UNLOAD Jan 13 23:43:42.472754 kernel: audit: type=1334 audit(1768347822.443:589): prog-id=180 op=UNLOAD Jan 13 23:43:42.443000 audit[4551]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4056 pid=4551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:42.479192 kernel: audit: type=1300 
audit(1768347822.443:589): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4056 pid=4551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:42.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063326437333665323665316265386537356462383065313936303336 Jan 13 23:43:42.486659 kernel: audit: type=1327 audit(1768347822.443:589): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063326437333665323665316265386537356462383065313936303336 Jan 13 23:43:42.486792 kernel: audit: type=1334 audit(1768347822.443:590): prog-id=179 op=UNLOAD Jan 13 23:43:42.443000 audit: BPF prog-id=179 op=UNLOAD Jan 13 23:43:42.443000 audit[4551]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4056 pid=4551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:42.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063326437333665323665316265386537356462383065313936303336 Jan 13 23:43:42.443000 audit: BPF prog-id=181 op=LOAD Jan 13 23:43:42.443000 audit[4551]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=4056 pid=4551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:42.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063326437333665323665316265386537356462383065313936303336 Jan 13 23:43:42.526089 containerd[1990]: time="2026-01-13T23:43:42.525931761Z" level=info msg="StartContainer for \"0c2d736e26e1be8e75db80e196036d2fec44e27bda20bde87f82b43bca45af3a\" returns successfully" Jan 13 23:43:42.829328 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 23:43:42.829490 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 13 23:43:43.074165 kubelet[3562]: I0113 23:43:43.073899 3562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sntjv" podStartSLOduration=1.7621428670000001 podStartE2EDuration="18.073866572s" podCreationTimestamp="2026-01-13 23:43:25 +0000 UTC" firstStartedPulling="2026-01-13 23:43:25.877033542 +0000 UTC m=+37.569776539" lastFinishedPulling="2026-01-13 23:43:42.188757247 +0000 UTC m=+53.881500244" observedRunningTime="2026-01-13 23:43:43.065297324 +0000 UTC m=+54.758040309" watchObservedRunningTime="2026-01-13 23:43:43.073866572 +0000 UTC m=+54.766609557" Jan 13 23:43:43.157172 kubelet[3562]: I0113 23:43:43.156998 3562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8956cacd-e75c-4e27-9bd0-1ec2f2c552be-whisker-ca-bundle\") pod \"8956cacd-e75c-4e27-9bd0-1ec2f2c552be\" (UID: \"8956cacd-e75c-4e27-9bd0-1ec2f2c552be\") " Jan 13 23:43:43.157172 kubelet[3562]: I0113 23:43:43.157113 3562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8956cacd-e75c-4e27-9bd0-1ec2f2c552be-whisker-backend-key-pair\") pod \"8956cacd-e75c-4e27-9bd0-1ec2f2c552be\" (UID: \"8956cacd-e75c-4e27-9bd0-1ec2f2c552be\") " Jan 13 23:43:43.157396 kubelet[3562]: I0113 23:43:43.157208 3562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh8d6\" (UniqueName: \"kubernetes.io/projected/8956cacd-e75c-4e27-9bd0-1ec2f2c552be-kube-api-access-sh8d6\") pod \"8956cacd-e75c-4e27-9bd0-1ec2f2c552be\" (UID: \"8956cacd-e75c-4e27-9bd0-1ec2f2c552be\") " Jan 13 23:43:43.160538 kubelet[3562]: I0113 23:43:43.159857 3562 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8956cacd-e75c-4e27-9bd0-1ec2f2c552be-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8956cacd-e75c-4e27-9bd0-1ec2f2c552be" (UID: "8956cacd-e75c-4e27-9bd0-1ec2f2c552be"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 13 23:43:43.178453 systemd[1]: var-lib-kubelet-pods-8956cacd\x2de75c\x2d4e27\x2d9bd0\x2d1ec2f2c552be-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsh8d6.mount: Deactivated successfully. Jan 13 23:43:43.190745 systemd[1]: var-lib-kubelet-pods-8956cacd\x2de75c\x2d4e27\x2d9bd0\x2d1ec2f2c552be-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 13 23:43:43.201335 kubelet[3562]: I0113 23:43:43.201269 3562 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8956cacd-e75c-4e27-9bd0-1ec2f2c552be-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8956cacd-e75c-4e27-9bd0-1ec2f2c552be" (UID: "8956cacd-e75c-4e27-9bd0-1ec2f2c552be"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 13 23:43:43.202431 kubelet[3562]: I0113 23:43:43.201624 3562 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8956cacd-e75c-4e27-9bd0-1ec2f2c552be-kube-api-access-sh8d6" (OuterVolumeSpecName: "kube-api-access-sh8d6") pod "8956cacd-e75c-4e27-9bd0-1ec2f2c552be" (UID: "8956cacd-e75c-4e27-9bd0-1ec2f2c552be"). InnerVolumeSpecName "kube-api-access-sh8d6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 13 23:43:43.258408 kubelet[3562]: I0113 23:43:43.258346 3562 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sh8d6\" (UniqueName: \"kubernetes.io/projected/8956cacd-e75c-4e27-9bd0-1ec2f2c552be-kube-api-access-sh8d6\") on node \"ip-172-31-22-81\" DevicePath \"\"" Jan 13 23:43:43.258408 kubelet[3562]: I0113 23:43:43.258406 3562 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8956cacd-e75c-4e27-9bd0-1ec2f2c552be-whisker-ca-bundle\") on node \"ip-172-31-22-81\" DevicePath \"\"" Jan 13 23:43:43.258638 kubelet[3562]: I0113 23:43:43.258433 3562 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8956cacd-e75c-4e27-9bd0-1ec2f2c552be-whisker-backend-key-pair\") on node \"ip-172-31-22-81\" DevicePath \"\"" Jan 13 23:43:44.011122 systemd[1]: Removed slice kubepods-besteffort-pod8956cacd_e75c_4e27_9bd0_1ec2f2c552be.slice - libcontainer container kubepods-besteffort-pod8956cacd_e75c_4e27_9bd0_1ec2f2c552be.slice. Jan 13 23:43:44.150973 systemd[1]: Created slice kubepods-besteffort-podd5bf06bf_e923_4a15_849e_51c08230b88e.slice - libcontainer container kubepods-besteffort-podd5bf06bf_e923_4a15_849e_51c08230b88e.slice. Jan 13 23:43:44.266940 kubelet[3562]: I0113 23:43:44.266772 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4qrr\" (UniqueName: \"kubernetes.io/projected/d5bf06bf-e923-4a15-849e-51c08230b88e-kube-api-access-f4qrr\") pod \"whisker-78b56f8d66-w4wmx\" (UID: \"d5bf06bf-e923-4a15-849e-51c08230b88e\") " pod="calico-system/whisker-78b56f8d66-w4wmx" Jan 13 23:43:44.266940 kubelet[3562]: I0113 23:43:44.266879 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5bf06bf-e923-4a15-849e-51c08230b88e-whisker-ca-bundle\") pod \"whisker-78b56f8d66-w4wmx\" (UID: \"d5bf06bf-e923-4a15-849e-51c08230b88e\") " pod="calico-system/whisker-78b56f8d66-w4wmx" Jan 13 23:43:44.267616 kubelet[3562]: I0113 23:43:44.267316 3562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d5bf06bf-e923-4a15-849e-51c08230b88e-whisker-backend-key-pair\") pod \"whisker-78b56f8d66-w4wmx\" (UID: \"d5bf06bf-e923-4a15-849e-51c08230b88e\") " pod="calico-system/whisker-78b56f8d66-w4wmx" Jan 13 23:43:44.460407 containerd[1990]: time="2026-01-13T23:43:44.460325542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78b56f8d66-w4wmx,Uid:d5bf06bf-e923-4a15-849e-51c08230b88e,Namespace:calico-system,Attempt:0,}" Jan 13 23:43:44.603577 kubelet[3562]: I0113 23:43:44.603172 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8956cacd-e75c-4e27-9bd0-1ec2f2c552be" path="/var/lib/kubelet/pods/8956cacd-e75c-4e27-9bd0-1ec2f2c552be/volumes" Jan 13 23:43:45.406000 audit: BPF prog-id=182 op=LOAD Jan 13 23:43:45.406000 audit[4798]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd1104988 a2=98 a3=ffffd1104978 items=0 ppid=4696 pid=4798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.406000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 13 23:43:45.407000 audit: BPF prog-id=182 op=UNLOAD Jan 13 23:43:45.407000 audit[4798]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd1104958 a3=0 items=0 ppid=4696 pid=4798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.407000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 13 23:43:45.407000 audit: BPF prog-id=183 op=LOAD Jan 13 23:43:45.407000 audit[4798]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd1104838 a2=74 a3=95 items=0 ppid=4696 pid=4798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.407000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 13 23:43:45.407000 audit: BPF prog-id=183 op=UNLOAD Jan 13 23:43:45.407000 audit[4798]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4696 pid=4798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.407000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 13 23:43:45.407000 audit: BPF prog-id=184 op=LOAD Jan 13 23:43:45.407000 audit[4798]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd1104868 a2=40 a3=ffffd1104898 items=0 ppid=4696 pid=4798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.407000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 13 23:43:45.407000 audit: BPF prog-id=184 op=UNLOAD Jan 13 23:43:45.407000 audit[4798]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffd1104898 items=0 ppid=4696 pid=4798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.407000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 13 23:43:45.413000 audit: BPF prog-id=185 op=LOAD Jan 13 23:43:45.413000 audit[4799]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdc2779b8 a2=98 a3=ffffdc2779a8 items=0 ppid=4696 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.413000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:43:45.413000 audit: BPF prog-id=185 op=UNLOAD Jan 13 23:43:45.413000 audit[4799]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffdc277988 a3=0 items=0 ppid=4696 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.413000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:43:45.413000 audit: BPF prog-id=186 op=LOAD Jan 13 23:43:45.413000 audit[4799]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdc277648 a2=74 a3=95 items=0 ppid=4696 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.413000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:43:45.413000 audit: BPF prog-id=186 op=UNLOAD Jan 13 23:43:45.413000 audit[4799]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4696 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.413000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:43:45.413000 audit: BPF prog-id=187 op=LOAD Jan 13 23:43:45.413000 audit[4799]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdc2776a8 a2=94 a3=2 items=0 ppid=4696 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.413000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:43:45.415000 audit: BPF prog-id=187 op=UNLOAD Jan 13 23:43:45.415000 audit[4799]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4696 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.415000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:43:45.623000 audit: BPF prog-id=188 op=LOAD Jan 13 23:43:45.623000 audit[4799]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdc277668 a2=40 a3=ffffdc277698 items=0 ppid=4696 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 13 23:43:45.623000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:43:45.623000 audit: BPF prog-id=188 op=UNLOAD Jan 13 23:43:45.623000 audit[4799]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffdc277698 items=0 ppid=4696 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.623000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:43:45.642000 audit: BPF prog-id=189 op=LOAD Jan 13 23:43:45.642000 audit[4799]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdc277678 a2=94 a3=4 items=0 ppid=4696 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.642000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:43:45.642000 audit: BPF prog-id=189 op=UNLOAD Jan 13 23:43:45.642000 audit[4799]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4696 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.642000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:43:45.643000 audit: BPF prog-id=190 op=LOAD Jan 13 23:43:45.643000 audit[4799]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffdc2774b8 a2=94 a3=5 items=0 ppid=4696 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.643000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:43:45.643000 audit: BPF prog-id=190 op=UNLOAD Jan 13 23:43:45.643000 audit[4799]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4696 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.643000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:43:45.644000 audit: BPF prog-id=191 op=LOAD Jan 13 23:43:45.644000 audit[4799]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdc2776e8 a2=94 a3=6 items=0 ppid=4696 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.644000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:43:45.644000 audit: BPF prog-id=191 op=UNLOAD Jan 13 23:43:45.644000 audit[4799]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4696 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.644000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:43:45.645000 audit: BPF prog-id=192 op=LOAD Jan 13 23:43:45.645000 audit[4799]: SYSCALL 
arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdc276eb8 a2=94 a3=83 items=0 ppid=4696 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.645000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:43:45.645000 audit: BPF prog-id=193 op=LOAD Jan 13 23:43:45.645000 audit[4799]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffdc276c78 a2=94 a3=2 items=0 ppid=4696 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.645000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:43:45.646000 audit: BPF prog-id=193 op=UNLOAD Jan 13 23:43:45.646000 audit[4799]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4696 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.646000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:43:45.647000 audit: BPF prog-id=192 op=UNLOAD Jan 13 23:43:45.647000 audit[4799]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=3d1aa620 a3=3d19db00 items=0 ppid=4696 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.647000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:43:45.673000 audit: BPF prog-id=194 op=LOAD Jan 13 23:43:45.673000 audit[4802]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffda59eee8 a2=98 a3=ffffda59eed8 items=0 ppid=4696 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.673000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 13 23:43:45.673000 audit: BPF prog-id=194 op=UNLOAD Jan 13 23:43:45.673000 audit[4802]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffda59eeb8 a3=0 items=0 ppid=4696 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.673000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 13 23:43:45.673000 audit: BPF prog-id=195 op=LOAD Jan 13 23:43:45.673000 audit[4802]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffda59ed98 a2=74 a3=95 items=0 ppid=4696 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.673000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 13 23:43:45.673000 audit: BPF prog-id=195 op=UNLOAD Jan 13 23:43:45.673000 audit[4802]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4696 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.673000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 13 23:43:45.673000 audit: BPF prog-id=196 op=LOAD Jan 13 23:43:45.673000 audit[4802]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffda59edc8 a2=40 a3=ffffda59edf8 items=0 ppid=4696 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.673000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 13 23:43:45.673000 audit: BPF prog-id=196 op=UNLOAD Jan 13 23:43:45.673000 audit[4802]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffda59edf8 items=0 ppid=4696 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.673000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 13 23:43:45.836994 systemd-networkd[1570]: vxlan.calico: Link UP Jan 13 23:43:45.837032 systemd-networkd[1570]: vxlan.calico: Gained carrier Jan 13 23:43:45.855115 (udev-worker)[4590]: Network interface NamePolicy= disabled on kernel command line. Jan 13 23:43:45.898020 (udev-worker)[4591]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 23:43:45.899000 audit: BPF prog-id=197 op=LOAD Jan 13 23:43:45.899000 audit[4835]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffccb52aa8 a2=98 a3=ffffccb52a98 items=0 ppid=4696 pid=4835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.899000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:43:45.899000 audit: BPF prog-id=197 op=UNLOAD Jan 13 23:43:45.899000 audit[4835]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffccb52a78 a3=0 items=0 ppid=4696 pid=4835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.899000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:43:45.901000 audit: BPF prog-id=198 op=LOAD Jan 13 23:43:45.901000 audit[4835]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffccb52788 a2=74 a3=95 items=0 ppid=4696 pid=4835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.901000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:43:45.903000 audit: BPF prog-id=198 op=UNLOAD Jan 13 23:43:45.903000 audit[4835]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4696 pid=4835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.903000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:43:45.903000 audit: BPF prog-id=199 op=LOAD Jan 13 23:43:45.903000 audit[4835]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffccb527e8 a2=94 a3=2 items=0 ppid=4696 pid=4835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.903000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:43:45.903000 audit: BPF prog-id=199 op=UNLOAD Jan 13 23:43:45.903000 audit[4835]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4696 pid=4835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.903000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:43:45.903000 audit: BPF prog-id=200 op=LOAD Jan 13 23:43:45.903000 audit[4835]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffccb52668 a2=40 a3=ffffccb52698 items=0 ppid=4696 pid=4835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.903000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:43:45.903000 audit: BPF prog-id=200 op=UNLOAD Jan 13 23:43:45.903000 audit[4835]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffccb52698 items=0 ppid=4696 pid=4835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.903000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:43:45.903000 audit: BPF prog-id=201 op=LOAD Jan 13 23:43:45.903000 audit[4835]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffccb527b8 a2=94 a3=b7 items=0 ppid=4696 pid=4835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.903000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:43:45.904000 audit: BPF prog-id=201 op=UNLOAD Jan 13 23:43:45.904000 audit[4835]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4696 pid=4835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.904000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:43:45.907000 audit: BPF prog-id=202 op=LOAD Jan 13 23:43:45.907000 audit[4835]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffccb51e68 a2=94 a3=2 items=0 ppid=4696 pid=4835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.907000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:43:45.912000 audit: BPF prog-id=202 op=UNLOAD Jan 13 23:43:45.912000 audit[4835]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4696 pid=4835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.912000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:43:45.912000 audit: BPF prog-id=203 op=LOAD Jan 13 23:43:45.912000 audit[4835]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffccb51ff8 a2=94 a3=30 items=0 ppid=4696 pid=4835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.912000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:43:45.927000 audit: BPF prog-id=204 op=LOAD Jan 13 23:43:45.927000 audit[4840]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd728de38 a2=98 a3=ffffd728de28 items=0 ppid=4696 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.927000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:43:45.929000 audit: BPF prog-id=204 op=UNLOAD Jan 13 23:43:45.929000 audit[4840]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd728de08 a3=0 items=0 ppid=4696 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.929000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:43:45.931000 audit: BPF prog-id=205 op=LOAD Jan 13 23:43:45.931000 audit[4840]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd728dac8 a2=74 a3=95 items=0 ppid=4696 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.931000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:43:45.932000 audit: BPF prog-id=205 op=UNLOAD Jan 13 23:43:45.932000 audit[4840]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4696 pid=4840 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.932000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:43:45.932000 audit: BPF prog-id=206 op=LOAD Jan 13 23:43:45.932000 audit[4840]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd728db28 a2=94 a3=2 items=0 ppid=4696 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.932000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:43:45.936000 audit: BPF prog-id=206 op=UNLOAD Jan 13 23:43:45.936000 audit[4840]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4696 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:45.936000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:43:46.122016 systemd-networkd[1570]: cali03e1dff9f78: Link UP Jan 13 23:43:46.123103 systemd-networkd[1570]: cali03e1dff9f78: Gained carrier Jan 13 23:43:46.200000 audit: BPF prog-id=207 op=LOAD Jan 13 23:43:46.200000 audit[4840]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd728dae8 a2=40 a3=ffffd728db18 items=0 ppid=4696 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.200000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:43:46.200000 audit: BPF prog-id=207 op=UNLOAD Jan 13 23:43:46.200000 audit[4840]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffd728db18 items=0 ppid=4696 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.200000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:43:46.219000 audit: BPF prog-id=208 op=LOAD Jan 13 23:43:46.219000 audit[4840]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd728daf8 a2=94 a3=4 items=0 ppid=4696 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.219000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:43:46.219000 audit: BPF prog-id=208 op=UNLOAD Jan 13 23:43:46.219000 audit[4840]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4696 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.219000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:43:46.220000 audit: BPF prog-id=209 op=LOAD Jan 13 23:43:46.220000 audit[4840]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd728d938 a2=94 a3=5 items=0 ppid=4696 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.220000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:43:46.220000 audit: BPF prog-id=209 op=UNLOAD Jan 13 23:43:46.220000 audit[4840]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4696 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.220000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:43:46.220000 audit: BPF prog-id=210 op=LOAD Jan 13 23:43:46.220000 audit[4840]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd728db68 a2=94 a3=6 items=0 ppid=4696 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.220000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:43:46.220000 audit: BPF prog-id=210 op=UNLOAD Jan 13 23:43:46.220000 audit[4840]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4696 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.220000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:43:46.221000 audit: BPF prog-id=211 op=LOAD Jan 13 23:43:46.221000 audit[4840]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd728d338 a2=94 a3=83 items=0 ppid=4696 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.221000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:43:46.221000 audit: BPF prog-id=212 op=LOAD Jan 13 23:43:46.221000 audit[4840]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffd728d0f8 a2=94 a3=2 items=0 ppid=4696 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.221000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:43:46.221000 audit: BPF prog-id=212 op=UNLOAD Jan 13 23:43:46.221000 audit[4840]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4696 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.221000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:43:46.222000 audit: BPF prog-id=211 op=UNLOAD Jan 13 23:43:46.222000 audit[4840]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=33528620 a3=3351bb00 items=0 ppid=4696 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.222000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:43:46.234000 audit: BPF prog-id=203 op=UNLOAD Jan 13 23:43:46.234000 audit[4696]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=40007a8940 a2=0 a3=0 items=0 ppid=4678 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.234000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 13 23:43:46.277333 containerd[1990]: 2026-01-13 23:43:44.599 [INFO][4667] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 23:43:46.277333 containerd[1990]: 2026-01-13 23:43:45.767 [INFO][4667] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--81-k8s-whisker--78b56f8d66--w4wmx-eth0 whisker-78b56f8d66- calico-system d5bf06bf-e923-4a15-849e-51c08230b88e 917 0 2026-01-13 23:43:44 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:78b56f8d66 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-22-81 whisker-78b56f8d66-w4wmx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali03e1dff9f78 [] [] }} ContainerID="6d71cbd998ac63e11e6c9db8b34f38a3f7d5560c80bcbcd61f6aeb5a38ebaa88" 
Namespace="calico-system" Pod="whisker-78b56f8d66-w4wmx" WorkloadEndpoint="ip--172--31--22--81-k8s-whisker--78b56f8d66--w4wmx-" Jan 13 23:43:46.277333 containerd[1990]: 2026-01-13 23:43:45.767 [INFO][4667] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d71cbd998ac63e11e6c9db8b34f38a3f7d5560c80bcbcd61f6aeb5a38ebaa88" Namespace="calico-system" Pod="whisker-78b56f8d66-w4wmx" WorkloadEndpoint="ip--172--31--22--81-k8s-whisker--78b56f8d66--w4wmx-eth0" Jan 13 23:43:46.277333 containerd[1990]: 2026-01-13 23:43:45.994 [INFO][4819] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d71cbd998ac63e11e6c9db8b34f38a3f7d5560c80bcbcd61f6aeb5a38ebaa88" HandleID="k8s-pod-network.6d71cbd998ac63e11e6c9db8b34f38a3f7d5560c80bcbcd61f6aeb5a38ebaa88" Workload="ip--172--31--22--81-k8s-whisker--78b56f8d66--w4wmx-eth0" Jan 13 23:43:46.278907 containerd[1990]: 2026-01-13 23:43:45.994 [INFO][4819] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6d71cbd998ac63e11e6c9db8b34f38a3f7d5560c80bcbcd61f6aeb5a38ebaa88" HandleID="k8s-pod-network.6d71cbd998ac63e11e6c9db8b34f38a3f7d5560c80bcbcd61f6aeb5a38ebaa88" Workload="ip--172--31--22--81-k8s-whisker--78b56f8d66--w4wmx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330140), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-81", "pod":"whisker-78b56f8d66-w4wmx", "timestamp":"2026-01-13 23:43:45.99405269 +0000 UTC"}, Hostname:"ip-172-31-22-81", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:43:46.278907 containerd[1990]: 2026-01-13 23:43:45.994 [INFO][4819] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:43:46.278907 containerd[1990]: 2026-01-13 23:43:45.995 [INFO][4819] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 13 23:43:46.278907 containerd[1990]: 2026-01-13 23:43:45.996 [INFO][4819] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-81' Jan 13 23:43:46.278907 containerd[1990]: 2026-01-13 23:43:46.014 [INFO][4819] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d71cbd998ac63e11e6c9db8b34f38a3f7d5560c80bcbcd61f6aeb5a38ebaa88" host="ip-172-31-22-81" Jan 13 23:43:46.278907 containerd[1990]: 2026-01-13 23:43:46.027 [INFO][4819] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-81" Jan 13 23:43:46.278907 containerd[1990]: 2026-01-13 23:43:46.036 [INFO][4819] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:46.278907 containerd[1990]: 2026-01-13 23:43:46.040 [INFO][4819] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:46.278907 containerd[1990]: 2026-01-13 23:43:46.044 [INFO][4819] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:46.280003 containerd[1990]: 2026-01-13 23:43:46.045 [INFO][4819] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.6d71cbd998ac63e11e6c9db8b34f38a3f7d5560c80bcbcd61f6aeb5a38ebaa88" host="ip-172-31-22-81" Jan 13 23:43:46.280003 containerd[1990]: 2026-01-13 23:43:46.048 [INFO][4819] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6d71cbd998ac63e11e6c9db8b34f38a3f7d5560c80bcbcd61f6aeb5a38ebaa88 Jan 13 23:43:46.280003 containerd[1990]: 2026-01-13 23:43:46.058 [INFO][4819] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.6d71cbd998ac63e11e6c9db8b34f38a3f7d5560c80bcbcd61f6aeb5a38ebaa88" host="ip-172-31-22-81" Jan 13 23:43:46.280003 containerd[1990]: 2026-01-13 23:43:46.075 [INFO][4819] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.129/26] block=192.168.127.128/26 handle="k8s-pod-network.6d71cbd998ac63e11e6c9db8b34f38a3f7d5560c80bcbcd61f6aeb5a38ebaa88" host="ip-172-31-22-81" Jan 13 23:43:46.280003 containerd[1990]: 2026-01-13 23:43:46.075 [INFO][4819] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.129/26] handle="k8s-pod-network.6d71cbd998ac63e11e6c9db8b34f38a3f7d5560c80bcbcd61f6aeb5a38ebaa88" host="ip-172-31-22-81" Jan 13 23:43:46.280003 containerd[1990]: 2026-01-13 23:43:46.075 [INFO][4819] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 13 23:43:46.280003 containerd[1990]: 2026-01-13 23:43:46.075 [INFO][4819] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.129/26] IPv6=[] ContainerID="6d71cbd998ac63e11e6c9db8b34f38a3f7d5560c80bcbcd61f6aeb5a38ebaa88" HandleID="k8s-pod-network.6d71cbd998ac63e11e6c9db8b34f38a3f7d5560c80bcbcd61f6aeb5a38ebaa88" Workload="ip--172--31--22--81-k8s-whisker--78b56f8d66--w4wmx-eth0" Jan 13 23:43:46.280905 containerd[1990]: 2026-01-13 23:43:46.086 [INFO][4667] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d71cbd998ac63e11e6c9db8b34f38a3f7d5560c80bcbcd61f6aeb5a38ebaa88" Namespace="calico-system" Pod="whisker-78b56f8d66-w4wmx" WorkloadEndpoint="ip--172--31--22--81-k8s-whisker--78b56f8d66--w4wmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--81-k8s-whisker--78b56f8d66--w4wmx-eth0", GenerateName:"whisker-78b56f8d66-", Namespace:"calico-system", SelfLink:"", UID:"d5bf06bf-e923-4a15-849e-51c08230b88e", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 43, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78b56f8d66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-81", ContainerID:"", Pod:"whisker-78b56f8d66-w4wmx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.127.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali03e1dff9f78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:43:46.280905 containerd[1990]: 2026-01-13 23:43:46.087 [INFO][4667] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.129/32] ContainerID="6d71cbd998ac63e11e6c9db8b34f38a3f7d5560c80bcbcd61f6aeb5a38ebaa88" Namespace="calico-system" Pod="whisker-78b56f8d66-w4wmx" WorkloadEndpoint="ip--172--31--22--81-k8s-whisker--78b56f8d66--w4wmx-eth0" Jan 13 23:43:46.281115 containerd[1990]: 2026-01-13 23:43:46.087 [INFO][4667] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03e1dff9f78 ContainerID="6d71cbd998ac63e11e6c9db8b34f38a3f7d5560c80bcbcd61f6aeb5a38ebaa88" Namespace="calico-system" Pod="whisker-78b56f8d66-w4wmx" WorkloadEndpoint="ip--172--31--22--81-k8s-whisker--78b56f8d66--w4wmx-eth0" Jan 13 23:43:46.281115 containerd[1990]: 2026-01-13 23:43:46.142 [INFO][4667] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d71cbd998ac63e11e6c9db8b34f38a3f7d5560c80bcbcd61f6aeb5a38ebaa88" Namespace="calico-system" Pod="whisker-78b56f8d66-w4wmx" WorkloadEndpoint="ip--172--31--22--81-k8s-whisker--78b56f8d66--w4wmx-eth0" Jan 13 23:43:46.282035 containerd[1990]: 2026-01-13 23:43:46.144 [INFO][4667] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d71cbd998ac63e11e6c9db8b34f38a3f7d5560c80bcbcd61f6aeb5a38ebaa88" Namespace="calico-system" Pod="whisker-78b56f8d66-w4wmx" 
WorkloadEndpoint="ip--172--31--22--81-k8s-whisker--78b56f8d66--w4wmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--81-k8s-whisker--78b56f8d66--w4wmx-eth0", GenerateName:"whisker-78b56f8d66-", Namespace:"calico-system", SelfLink:"", UID:"d5bf06bf-e923-4a15-849e-51c08230b88e", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 43, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78b56f8d66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-81", ContainerID:"6d71cbd998ac63e11e6c9db8b34f38a3f7d5560c80bcbcd61f6aeb5a38ebaa88", Pod:"whisker-78b56f8d66-w4wmx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.127.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali03e1dff9f78", MAC:"ce:81:45:55:ad:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:43:46.282268 containerd[1990]: 2026-01-13 23:43:46.273 [INFO][4667] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d71cbd998ac63e11e6c9db8b34f38a3f7d5560c80bcbcd61f6aeb5a38ebaa88" Namespace="calico-system" Pod="whisker-78b56f8d66-w4wmx" WorkloadEndpoint="ip--172--31--22--81-k8s-whisker--78b56f8d66--w4wmx-eth0" Jan 13 23:43:46.349156 containerd[1990]: time="2026-01-13T23:43:46.348960864Z" level=info msg="connecting to shim 6d71cbd998ac63e11e6c9db8b34f38a3f7d5560c80bcbcd61f6aeb5a38ebaa88" address="unix:///run/containerd/s/859ef35ecc3f83f547c1b00765985c97daf569766d13ff3dea1ef181dd8239e9" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:43:46.431776 systemd[1]: Started cri-containerd-6d71cbd998ac63e11e6c9db8b34f38a3f7d5560c80bcbcd61f6aeb5a38ebaa88.scope - libcontainer container 6d71cbd998ac63e11e6c9db8b34f38a3f7d5560c80bcbcd61f6aeb5a38ebaa88. 
Jan 13 23:43:46.446000 audit[4906]: NETFILTER_CFG table=nat:123 family=2 entries=15 op=nft_register_chain pid=4906 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:43:46.446000 audit[4906]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffc4caf2c0 a2=0 a3=ffff8da88fa8 items=0 ppid=4696 pid=4906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.446000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:43:46.449000 audit[4903]: NETFILTER_CFG table=raw:124 family=2 entries=21 op=nft_register_chain pid=4903 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:43:46.449000 audit[4903]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffda856c70 a2=0 a3=ffffb9ccbfa8 items=0 ppid=4696 pid=4903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.449000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:43:46.465000 audit[4911]: NETFILTER_CFG table=mangle:125 family=2 entries=16 op=nft_register_chain pid=4911 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:43:46.465000 audit[4911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffd24f1970 a2=0 a3=ffffafaccfa8 items=0 ppid=4696 pid=4911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.465000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:43:46.477000 audit[4910]: NETFILTER_CFG table=filter:126 family=2 entries=39 op=nft_register_chain pid=4910 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:43:46.477000 audit[4910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18968 a0=3 a1=fffffd2dc1c0 a2=0 a3=ffffaf9e6fa8 items=0 ppid=4696 pid=4910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.477000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:43:46.502000 audit: BPF prog-id=213 op=LOAD Jan 13 23:43:46.504000 audit: BPF prog-id=214 op=LOAD Jan 13 23:43:46.504000 audit[4890]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4875 pid=4890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.504000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664373163626439393861633633653131653663396462386233346633 Jan 13 23:43:46.505000 audit: BPF prog-id=214 op=UNLOAD Jan 13 23:43:46.505000 audit[4890]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4875 pid=4890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664373163626439393861633633653131653663396462386233346633 Jan 13 23:43:46.505000 audit: BPF prog-id=215 op=LOAD Jan 13 23:43:46.505000 audit[4890]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4875 pid=4890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664373163626439393861633633653131653663396462386233346633 Jan 13 23:43:46.506000 audit: BPF prog-id=216 op=LOAD Jan 13 23:43:46.506000 audit[4890]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4875 pid=4890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664373163626439393861633633653131653663396462386233346633 Jan 13 23:43:46.506000 audit: BPF prog-id=216 op=UNLOAD Jan 13 23:43:46.506000 audit[4890]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4875 pid=4890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664373163626439393861633633653131653663396462386233346633 Jan 13 23:43:46.507000 audit: BPF prog-id=215 op=UNLOAD Jan 13 23:43:46.507000 audit[4890]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4875 pid=4890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.507000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664373163626439393861633633653131653663396462386233346633 Jan 13 23:43:46.507000 audit: BPF prog-id=217 op=LOAD Jan 13 23:43:46.507000 audit[4890]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4875 pid=4890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664373163626439393861633633653131653663396462386233346633 Jan 13 23:43:46.562000 audit[4924]: NETFILTER_CFG table=filter:127 family=2 entries=59 op=nft_register_chain pid=4924 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:43:46.562000 audit[4924]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=35860 a0=3 a1=ffffcefd94c0 a2=0 a3=ffff81c66fa8 items=0 ppid=4696 pid=4924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.562000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:43:46.592353 containerd[1990]: time="2026-01-13T23:43:46.592118317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78b56f8d66-w4wmx,Uid:d5bf06bf-e923-4a15-849e-51c08230b88e,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d71cbd998ac63e11e6c9db8b34f38a3f7d5560c80bcbcd61f6aeb5a38ebaa88\"" Jan 13 23:43:46.597687 containerd[1990]: time="2026-01-13T23:43:46.596007481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 13 23:43:46.597687 containerd[1990]: time="2026-01-13T23:43:46.597546649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b8d4f74f9-kqpzz,Uid:3317981a-15b4-41f8-a3cf-26fbd9c6fbf1,Namespace:calico-system,Attempt:0,}" Jan 13 23:43:46.875725 systemd-networkd[1570]: cali56b756c4269: Link UP Jan 13 23:43:46.878743 systemd-networkd[1570]: cali56b756c4269: Gained carrier Jan 13 23:43:46.893492 containerd[1990]: time="2026-01-13T23:43:46.893427795Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:43:46.898451 containerd[1990]: time="2026-01-13T23:43:46.897867699Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 13 23:43:46.898451 containerd[1990]: time="2026-01-13T23:43:46.897976875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 13 23:43:46.900559 kubelet[3562]: E0113 23:43:46.899687 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:43:46.900559 kubelet[3562]: E0113 23:43:46.899758 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:43:46.915588 containerd[1990]: 2026-01-13 23:43:46.723 [INFO][4935] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--81-k8s-calico--kube--controllers--7b8d4f74f9--kqpzz-eth0 calico-kube-controllers-7b8d4f74f9- calico-system 3317981a-15b4-41f8-a3cf-26fbd9c6fbf1 846 0 2026-01-13 23:43:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b8d4f74f9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-22-81 calico-kube-controllers-7b8d4f74f9-kqpzz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali56b756c4269 [] [] }} ContainerID="bd45baf0f54ca422fa458a7257cf7a6cea361b18e107d7c0f27ac196d05eec98" Namespace="calico-system" Pod="calico-kube-controllers-7b8d4f74f9-kqpzz" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--kube--controllers--7b8d4f74f9--kqpzz-" Jan 13 23:43:46.915588 containerd[1990]: 2026-01-13 23:43:46.724 [INFO][4935] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd45baf0f54ca422fa458a7257cf7a6cea361b18e107d7c0f27ac196d05eec98" Namespace="calico-system" Pod="calico-kube-controllers-7b8d4f74f9-kqpzz" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--kube--controllers--7b8d4f74f9--kqpzz-eth0" Jan 13 23:43:46.915588 containerd[1990]: 2026-01-13 23:43:46.784 [INFO][4947] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd45baf0f54ca422fa458a7257cf7a6cea361b18e107d7c0f27ac196d05eec98" HandleID="k8s-pod-network.bd45baf0f54ca422fa458a7257cf7a6cea361b18e107d7c0f27ac196d05eec98" Workload="ip--172--31--22--81-k8s-calico--kube--controllers--7b8d4f74f9--kqpzz-eth0" Jan 13 23:43:46.917221 containerd[1990]: 2026-01-13 23:43:46.784 [INFO][4947] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bd45baf0f54ca422fa458a7257cf7a6cea361b18e107d7c0f27ac196d05eec98" HandleID="k8s-pod-network.bd45baf0f54ca422fa458a7257cf7a6cea361b18e107d7c0f27ac196d05eec98" Workload="ip--172--31--22--81-k8s-calico--kube--controllers--7b8d4f74f9--kqpzz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-81", "pod":"calico-kube-controllers-7b8d4f74f9-kqpzz", "timestamp":"2026-01-13 23:43:46.78437633 +0000 UTC"}, Hostname:"ip-172-31-22-81", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:43:46.917221 containerd[1990]: 2026-01-13 23:43:46.784 [INFO][4947] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:43:46.917221 containerd[1990]: 2026-01-13 23:43:46.784 [INFO][4947] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 13 23:43:46.917221 containerd[1990]: 2026-01-13 23:43:46.784 [INFO][4947] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-81' Jan 13 23:43:46.917221 containerd[1990]: 2026-01-13 23:43:46.801 [INFO][4947] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd45baf0f54ca422fa458a7257cf7a6cea361b18e107d7c0f27ac196d05eec98" host="ip-172-31-22-81" Jan 13 23:43:46.917221 containerd[1990]: 2026-01-13 23:43:46.808 [INFO][4947] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-81" Jan 13 23:43:46.917221 containerd[1990]: 2026-01-13 23:43:46.816 [INFO][4947] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:46.917221 containerd[1990]: 2026-01-13 23:43:46.820 [INFO][4947] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:46.917221 containerd[1990]: 2026-01-13 23:43:46.823 [INFO][4947] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:46.919730 kubelet[3562]: E0113 23:43:46.916117 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d4af49205dc04bf3a26511b093d7fa31,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f4qrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78b56f8d66-w4wmx_calico-system(d5bf06bf-e923-4a15-849e-51c08230b88e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 13 23:43:46.920195 containerd[1990]: 2026-01-13 23:43:46.824 [INFO][4947] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.bd45baf0f54ca422fa458a7257cf7a6cea361b18e107d7c0f27ac196d05eec98" host="ip-172-31-22-81" Jan 13 23:43:46.920195 containerd[1990]: 2026-01-13 23:43:46.826 [INFO][4947] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bd45baf0f54ca422fa458a7257cf7a6cea361b18e107d7c0f27ac196d05eec98 Jan 13 23:43:46.920195 containerd[1990]: 2026-01-13 23:43:46.855 [INFO][4947] ipam/ipam.go 
1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.bd45baf0f54ca422fa458a7257cf7a6cea361b18e107d7c0f27ac196d05eec98" host="ip-172-31-22-81" Jan 13 23:43:46.920195 containerd[1990]: 2026-01-13 23:43:46.864 [INFO][4947] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.130/26] block=192.168.127.128/26 handle="k8s-pod-network.bd45baf0f54ca422fa458a7257cf7a6cea361b18e107d7c0f27ac196d05eec98" host="ip-172-31-22-81" Jan 13 23:43:46.920195 containerd[1990]: 2026-01-13 23:43:46.865 [INFO][4947] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.130/26] handle="k8s-pod-network.bd45baf0f54ca422fa458a7257cf7a6cea361b18e107d7c0f27ac196d05eec98" host="ip-172-31-22-81" Jan 13 23:43:46.920195 containerd[1990]: 2026-01-13 23:43:46.865 [INFO][4947] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 13 23:43:46.920195 containerd[1990]: 2026-01-13 23:43:46.865 [INFO][4947] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.130/26] IPv6=[] ContainerID="bd45baf0f54ca422fa458a7257cf7a6cea361b18e107d7c0f27ac196d05eec98" HandleID="k8s-pod-network.bd45baf0f54ca422fa458a7257cf7a6cea361b18e107d7c0f27ac196d05eec98" Workload="ip--172--31--22--81-k8s-calico--kube--controllers--7b8d4f74f9--kqpzz-eth0" Jan 13 23:43:46.921278 containerd[1990]: 2026-01-13 23:43:46.870 [INFO][4935] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd45baf0f54ca422fa458a7257cf7a6cea361b18e107d7c0f27ac196d05eec98" Namespace="calico-system" Pod="calico-kube-controllers-7b8d4f74f9-kqpzz" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--kube--controllers--7b8d4f74f9--kqpzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--81-k8s-calico--kube--controllers--7b8d4f74f9--kqpzz-eth0", GenerateName:"calico-kube-controllers-7b8d4f74f9-", Namespace:"calico-system", SelfLink:"", UID:"3317981a-15b4-41f8-a3cf-26fbd9c6fbf1", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b8d4f74f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-81", ContainerID:"", Pod:"calico-kube-controllers-7b8d4f74f9-kqpzz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.127.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali56b756c4269", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:43:46.921441 containerd[1990]: 2026-01-13 23:43:46.870 [INFO][4935] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.130/32] ContainerID="bd45baf0f54ca422fa458a7257cf7a6cea361b18e107d7c0f27ac196d05eec98" Namespace="calico-system" Pod="calico-kube-controllers-7b8d4f74f9-kqpzz" 
WorkloadEndpoint="ip--172--31--22--81-k8s-calico--kube--controllers--7b8d4f74f9--kqpzz-eth0" Jan 13 23:43:46.921441 containerd[1990]: 2026-01-13 23:43:46.871 [INFO][4935] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56b756c4269 ContainerID="bd45baf0f54ca422fa458a7257cf7a6cea361b18e107d7c0f27ac196d05eec98" Namespace="calico-system" Pod="calico-kube-controllers-7b8d4f74f9-kqpzz" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--kube--controllers--7b8d4f74f9--kqpzz-eth0" Jan 13 23:43:46.921441 containerd[1990]: 2026-01-13 23:43:46.879 [INFO][4935] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd45baf0f54ca422fa458a7257cf7a6cea361b18e107d7c0f27ac196d05eec98" Namespace="calico-system" Pod="calico-kube-controllers-7b8d4f74f9-kqpzz" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--kube--controllers--7b8d4f74f9--kqpzz-eth0" Jan 13 23:43:46.921604 containerd[1990]: 2026-01-13 23:43:46.880 [INFO][4935] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd45baf0f54ca422fa458a7257cf7a6cea361b18e107d7c0f27ac196d05eec98" Namespace="calico-system" Pod="calico-kube-controllers-7b8d4f74f9-kqpzz" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--kube--controllers--7b8d4f74f9--kqpzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--81-k8s-calico--kube--controllers--7b8d4f74f9--kqpzz-eth0", GenerateName:"calico-kube-controllers-7b8d4f74f9-", Namespace:"calico-system", SelfLink:"", UID:"3317981a-15b4-41f8-a3cf-26fbd9c6fbf1", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b8d4f74f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-81", ContainerID:"bd45baf0f54ca422fa458a7257cf7a6cea361b18e107d7c0f27ac196d05eec98", Pod:"calico-kube-controllers-7b8d4f74f9-kqpzz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.127.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali56b756c4269", MAC:"ae:7e:0c:bb:3f:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:43:46.921745 containerd[1990]: 2026-01-13 23:43:46.906 [INFO][4935] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd45baf0f54ca422fa458a7257cf7a6cea361b18e107d7c0f27ac196d05eec98" Namespace="calico-system" Pod="calico-kube-controllers-7b8d4f74f9-kqpzz" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--kube--controllers--7b8d4f74f9--kqpzz-eth0" Jan 13 23:43:46.923033 containerd[1990]: time="2026-01-13T23:43:46.922392111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 13 23:43:46.956000 audit[4963]: NETFILTER_CFG table=filter:128 family=2 entries=36 
op=nft_register_chain pid=4963 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:43:46.956000 audit[4963]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=fffffe04e250 a2=0 a3=ffffb3f0efa8 items=0 ppid=4696 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:46.956000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:43:46.984058 containerd[1990]: time="2026-01-13T23:43:46.983180271Z" level=info msg="connecting to shim bd45baf0f54ca422fa458a7257cf7a6cea361b18e107d7c0f27ac196d05eec98" address="unix:///run/containerd/s/237a4a3dc8c4679f18037f5437ceac60d2add18a97c1d5b48ed775c1bdc33782" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:43:47.039510 systemd[1]: Started cri-containerd-bd45baf0f54ca422fa458a7257cf7a6cea361b18e107d7c0f27ac196d05eec98.scope - libcontainer container bd45baf0f54ca422fa458a7257cf7a6cea361b18e107d7c0f27ac196d05eec98. Jan 13 23:43:47.064000 audit: BPF prog-id=218 op=LOAD Jan 13 23:43:47.065000 audit: BPF prog-id=219 op=LOAD Jan 13 23:43:47.065000 audit[4984]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4972 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:47.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264343562616630663534636134323266613435386137323537636637 Jan 13 23:43:47.065000 audit: BPF prog-id=219 op=UNLOAD Jan 13 23:43:47.065000 audit[4984]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4972 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:47.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264343562616630663534636134323266613435386137323537636637 Jan 13 23:43:47.065000 audit: BPF prog-id=220 op=LOAD Jan 13 23:43:47.065000 audit[4984]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4972 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:47.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264343562616630663534636134323266613435386137323537636637 Jan 13 23:43:47.066000 audit: BPF prog-id=221 op=LOAD Jan 13 23:43:47.066000 audit[4984]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4972 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:47.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264343562616630663534636134323266613435386137323537636637 Jan 13 23:43:47.066000 audit: BPF prog-id=221 op=UNLOAD Jan 13 23:43:47.066000 audit[4984]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4972 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:47.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264343562616630663534636134323266613435386137323537636637 Jan 13 23:43:47.066000 audit: BPF prog-id=220 op=UNLOAD Jan 13 23:43:47.066000 audit[4984]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4972 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:47.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264343562616630663534636134323266613435386137323537636637 Jan 13 23:43:47.066000 audit: BPF prog-id=222 op=LOAD Jan 13 23:43:47.066000 audit[4984]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4972 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:47.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264343562616630663534636134323266613435386137323537636637 Jan 13 23:43:47.123990 containerd[1990]: time="2026-01-13T23:43:47.123925212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b8d4f74f9-kqpzz,Uid:3317981a-15b4-41f8-a3cf-26fbd9c6fbf1,Namespace:calico-system,Attempt:0,} returns sandbox id \"bd45baf0f54ca422fa458a7257cf7a6cea361b18e107d7c0f27ac196d05eec98\"" Jan 13 23:43:47.191446 containerd[1990]: time="2026-01-13T23:43:47.191373816Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:43:47.193680 containerd[1990]: time="2026-01-13T23:43:47.193506864Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 13 23:43:47.193680 containerd[1990]: time="2026-01-13T23:43:47.193557852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 13 23:43:47.193968 kubelet[3562]: 
E0113 23:43:47.193871 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:43:47.193968 kubelet[3562]: E0113 23:43:47.193929 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:43:47.195120 kubelet[3562]: E0113 23:43:47.194971 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f4qrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78b56f8d66-w4wmx_calico-system(d5bf06bf-e923-4a15-849e-51c08230b88e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 13 23:43:47.196114 containerd[1990]: time="2026-01-13T23:43:47.195980604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 13 23:43:47.197317 kubelet[3562]: E0113 23:43:47.197097 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78b56f8d66-w4wmx" podUID="d5bf06bf-e923-4a15-849e-51c08230b88e" Jan 13 23:43:47.462569 containerd[1990]: time="2026-01-13T23:43:47.462308497Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:43:47.465469 containerd[1990]: time="2026-01-13T23:43:47.465311389Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 13 23:43:47.465469 containerd[1990]: time="2026-01-13T23:43:47.465379657Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 13 23:43:47.465684 kubelet[3562]: E0113 23:43:47.465636 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:43:47.465775 kubelet[3562]: E0113 23:43:47.465694 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:43:47.465989 kubelet[3562]: E0113 23:43:47.465891 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-csplk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b8d4f74f9-kqpzz_calico-system(3317981a-15b4-41f8-a3cf-26fbd9c6fbf1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 13 23:43:47.467657 kubelet[3562]: E0113 23:43:47.467562 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b8d4f74f9-kqpzz" podUID="3317981a-15b4-41f8-a3cf-26fbd9c6fbf1" Jan 13 23:43:47.598863 containerd[1990]: time="2026-01-13T23:43:47.598685810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9dddc9f7-sd8kc,Uid:db70aac6-82d4-4ef8-98ae-1ad4091dd76e,Namespace:calico-apiserver,Attempt:0,}" Jan 13 23:43:47.599473 containerd[1990]: time="2026-01-13T23:43:47.598685546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bc5d5895-g9wvb,Uid:d529d459-4c8c-4f5e-b8a4-f53690574272,Namespace:calico-apiserver,Attempt:0,}" Jan 13 23:43:47.599855 containerd[1990]: time="2026-01-13T23:43:47.599557922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zwrvg,Uid:86622233-a85f-41fd-b458-2112644e82b9,Namespace:calico-system,Attempt:0,}" Jan 13 23:43:47.648370 systemd-networkd[1570]: cali03e1dff9f78: Gained IPv6LL Jan 13 23:43:47.837387 systemd-networkd[1570]: vxlan.calico: Gained IPv6LL Jan 13 23:43:48.006200 systemd-networkd[1570]: calidc7f545cbc7: Link UP Jan 13 23:43:48.009913 systemd-networkd[1570]: calidc7f545cbc7: Gained carrier Jan 13 23:43:48.036108 kubelet[3562]: E0113 23:43:48.036024 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7b8d4f74f9-kqpzz" podUID="3317981a-15b4-41f8-a3cf-26fbd9c6fbf1" Jan 13 23:43:48.043815 kubelet[3562]: E0113 23:43:48.043692 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78b56f8d66-w4wmx" podUID="d5bf06bf-e923-4a15-849e-51c08230b88e" Jan 13 23:43:48.071172 containerd[1990]: 2026-01-13 23:43:47.768 [INFO][5017] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--81-k8s-calico--apiserver--7c9dddc9f7--sd8kc-eth0 calico-apiserver-7c9dddc9f7- calico-apiserver db70aac6-82d4-4ef8-98ae-1ad4091dd76e 849 0 2026-01-13 23:43:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c9dddc9f7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-22-81 calico-apiserver-7c9dddc9f7-sd8kc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidc7f545cbc7 [] [] }} ContainerID="d9ef2d7cf19d7ee530501a04089bef0e01786100e31ec7e02913e748246ff9bf" Namespace="calico-apiserver" Pod="calico-apiserver-7c9dddc9f7-sd8kc" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--apiserver--7c9dddc9f7--sd8kc-" Jan 13 23:43:48.071172 containerd[1990]: 2026-01-13 23:43:47.768 [INFO][5017] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d9ef2d7cf19d7ee530501a04089bef0e01786100e31ec7e02913e748246ff9bf" Namespace="calico-apiserver" Pod="calico-apiserver-7c9dddc9f7-sd8kc" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--apiserver--7c9dddc9f7--sd8kc-eth0" Jan 13 23:43:48.071172 containerd[1990]: 2026-01-13 23:43:47.865 [INFO][5044] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9ef2d7cf19d7ee530501a04089bef0e01786100e31ec7e02913e748246ff9bf" HandleID="k8s-pod-network.d9ef2d7cf19d7ee530501a04089bef0e01786100e31ec7e02913e748246ff9bf" Workload="ip--172--31--22--81-k8s-calico--apiserver--7c9dddc9f7--sd8kc-eth0" Jan 13 23:43:48.071564 containerd[1990]: 2026-01-13 23:43:47.866 [INFO][5044] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d9ef2d7cf19d7ee530501a04089bef0e01786100e31ec7e02913e748246ff9bf" HandleID="k8s-pod-network.d9ef2d7cf19d7ee530501a04089bef0e01786100e31ec7e02913e748246ff9bf" Workload="ip--172--31--22--81-k8s-calico--apiserver--7c9dddc9f7--sd8kc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000231970), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-22-81", "pod":"calico-apiserver-7c9dddc9f7-sd8kc", "timestamp":"2026-01-13 23:43:47.865715799 +0000 UTC"}, Hostname:"ip-172-31-22-81", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:43:48.071564 containerd[1990]: 2026-01-13 23:43:47.866 [INFO][5044] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:43:48.071564 containerd[1990]: 2026-01-13 23:43:47.866 [INFO][5044] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 13 23:43:48.071564 containerd[1990]: 2026-01-13 23:43:47.866 [INFO][5044] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-81' Jan 13 23:43:48.071564 containerd[1990]: 2026-01-13 23:43:47.887 [INFO][5044] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d9ef2d7cf19d7ee530501a04089bef0e01786100e31ec7e02913e748246ff9bf" host="ip-172-31-22-81" Jan 13 23:43:48.071564 containerd[1990]: 2026-01-13 23:43:47.894 [INFO][5044] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-81" Jan 13 23:43:48.071564 containerd[1990]: 2026-01-13 23:43:47.912 [INFO][5044] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:48.071564 containerd[1990]: 2026-01-13 23:43:47.919 [INFO][5044] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:48.071564 containerd[1990]: 2026-01-13 23:43:47.925 [INFO][5044] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:48.072666 containerd[1990]: 2026-01-13 23:43:47.925 [INFO][5044] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.d9ef2d7cf19d7ee530501a04089bef0e01786100e31ec7e02913e748246ff9bf" host="ip-172-31-22-81" Jan 13 23:43:48.072666 containerd[1990]: 2026-01-13 23:43:47.930 [INFO][5044] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d9ef2d7cf19d7ee530501a04089bef0e01786100e31ec7e02913e748246ff9bf Jan 13 23:43:48.072666 containerd[1990]: 2026-01-13 23:43:47.952 [INFO][5044] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.d9ef2d7cf19d7ee530501a04089bef0e01786100e31ec7e02913e748246ff9bf" host="ip-172-31-22-81" Jan 13 23:43:48.072666 containerd[1990]: 2026-01-13 23:43:47.971 [INFO][5044] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.131/26] block=192.168.127.128/26 handle="k8s-pod-network.d9ef2d7cf19d7ee530501a04089bef0e01786100e31ec7e02913e748246ff9bf" host="ip-172-31-22-81" Jan 13 23:43:48.072666 containerd[1990]: 2026-01-13 23:43:47.971 [INFO][5044] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.131/26] handle="k8s-pod-network.d9ef2d7cf19d7ee530501a04089bef0e01786100e31ec7e02913e748246ff9bf" host="ip-172-31-22-81" Jan 13 23:43:48.072666 containerd[1990]: 2026-01-13 23:43:47.971 [INFO][5044] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
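The IPAM trace above (acquire the host-wide lock, confirm this host's affinity to 192.168.127.128/26, load the block, assign one address, write the block back to claim it, release the lock) is Calico's block-affinity allocation at work. The sketch below is illustrative only and not Calico's code: the type and function names are assumptions, and it merely mirrors the claim-next-free-address step that the log records.

```go
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block is a stand-in for an IPAM affinity block such as 192.168.127.128/26.
type block struct {
	cidr netip.Prefix
	used map[netip.Addr]bool
}

// ipamLock stands in for the "host-wide IPAM lock" the plugin logs around each claim.
var ipamLock sync.Mutex

// allocate claims the next free address in the block, mirroring
// "Attempting to assign 1 addresses from block" / "Successfully claimed IPs".
func allocate(b *block) (netip.Addr, error) {
	ipamLock.Lock()
	defer ipamLock.Unlock()
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if !b.used[a] {
			b.used[a] = true // "Writing block in order to claim IPs"
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.127.128/26"), used: map[netip.Addr]bool{}}
	// Pretend .128-.130 are already taken, so the next claim lands on .131,
	// as it does for calico-apiserver-7c9dddc9f7-sd8kc in the trace above.
	for _, s := range []string{"192.168.127.128", "192.168.127.129", "192.168.127.130"} {
		b.used[netip.MustParseAddr(s)] = true
	}
	ip, err := allocate(b)
	if err != nil {
		panic(err)
	}
	fmt.Println("claimed", ip) // claimed 192.168.127.131
}
```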
Jan 13 23:43:48.072666 containerd[1990]: 2026-01-13 23:43:47.971 [INFO][5044] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.131/26] IPv6=[] ContainerID="d9ef2d7cf19d7ee530501a04089bef0e01786100e31ec7e02913e748246ff9bf" HandleID="k8s-pod-network.d9ef2d7cf19d7ee530501a04089bef0e01786100e31ec7e02913e748246ff9bf" Workload="ip--172--31--22--81-k8s-calico--apiserver--7c9dddc9f7--sd8kc-eth0" Jan 13 23:43:48.074272 containerd[1990]: 2026-01-13 23:43:47.983 [INFO][5017] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d9ef2d7cf19d7ee530501a04089bef0e01786100e31ec7e02913e748246ff9bf" Namespace="calico-apiserver" Pod="calico-apiserver-7c9dddc9f7-sd8kc" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--apiserver--7c9dddc9f7--sd8kc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--81-k8s-calico--apiserver--7c9dddc9f7--sd8kc-eth0", GenerateName:"calico-apiserver-7c9dddc9f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"db70aac6-82d4-4ef8-98ae-1ad4091dd76e", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c9dddc9f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-81", ContainerID:"", Pod:"calico-apiserver-7c9dddc9f7-sd8kc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc7f545cbc7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:43:48.074437 containerd[1990]: 2026-01-13 23:43:47.983 [INFO][5017] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.131/32] ContainerID="d9ef2d7cf19d7ee530501a04089bef0e01786100e31ec7e02913e748246ff9bf" Namespace="calico-apiserver" Pod="calico-apiserver-7c9dddc9f7-sd8kc" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--apiserver--7c9dddc9f7--sd8kc-eth0" Jan 13 23:43:48.074437 containerd[1990]: 2026-01-13 23:43:47.983 [INFO][5017] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc7f545cbc7 ContainerID="d9ef2d7cf19d7ee530501a04089bef0e01786100e31ec7e02913e748246ff9bf" Namespace="calico-apiserver" Pod="calico-apiserver-7c9dddc9f7-sd8kc" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--apiserver--7c9dddc9f7--sd8kc-eth0" Jan 13 23:43:48.074437 containerd[1990]: 2026-01-13 23:43:48.014 [INFO][5017] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d9ef2d7cf19d7ee530501a04089bef0e01786100e31ec7e02913e748246ff9bf" Namespace="calico-apiserver" Pod="calico-apiserver-7c9dddc9f7-sd8kc" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--apiserver--7c9dddc9f7--sd8kc-eth0" Jan 13 23:43:48.074574 containerd[1990]: 2026-01-13 23:43:48.024 [INFO][5017] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="d9ef2d7cf19d7ee530501a04089bef0e01786100e31ec7e02913e748246ff9bf" Namespace="calico-apiserver" Pod="calico-apiserver-7c9dddc9f7-sd8kc" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--apiserver--7c9dddc9f7--sd8kc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--81-k8s-calico--apiserver--7c9dddc9f7--sd8kc-eth0", GenerateName:"calico-apiserver-7c9dddc9f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"db70aac6-82d4-4ef8-98ae-1ad4091dd76e", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c9dddc9f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-81", ContainerID:"d9ef2d7cf19d7ee530501a04089bef0e01786100e31ec7e02913e748246ff9bf", Pod:"calico-apiserver-7c9dddc9f7-sd8kc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc7f545cbc7", MAC:"86:fa:06:49:1d:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:43:48.074695 containerd[1990]: 2026-01-13 23:43:48.059 [INFO][5017] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d9ef2d7cf19d7ee530501a04089bef0e01786100e31ec7e02913e748246ff9bf" Namespace="calico-apiserver" Pod="calico-apiserver-7c9dddc9f7-sd8kc" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--apiserver--7c9dddc9f7--sd8kc-eth0" Jan 13 23:43:48.183000 audit[5074]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=5074 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:48.190768 kernel: kauditd_printk_skb: 253 callbacks suppressed Jan 13 23:43:48.190888 kernel: audit: type=1325 audit(1768347828.183:676): table=filter:129 family=2 entries=20 op=nft_register_rule pid=5074 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:48.183000 audit[5074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd70b0290 a2=0 a3=1 items=0 ppid=3666 pid=5074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:48.206158 kernel: audit: type=1300 audit(1768347828.183:676): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd70b0290 a2=0 a3=1 items=0 ppid=3666 pid=5074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:48.183000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:48.215514 kernel: audit: type=1327 audit(1768347828.183:676): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:48.222582 containerd[1990]: time="2026-01-13T23:43:48.222523717Z" level=info msg="connecting to shim d9ef2d7cf19d7ee530501a04089bef0e01786100e31ec7e02913e748246ff9bf" address="unix:///run/containerd/s/fd5261c2e309b1d07afd985826f558c2ea4a4a867d7677f220d99c9ee74a318a" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:43:48.219000 audit[5074]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=5074 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:48.230298 systemd-networkd[1570]: cali703880400e3: Link UP Jan 13 23:43:48.219000 audit[5074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffd70b0290 a2=0 a3=1 items=0 ppid=3666 pid=5074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:48.237910 kernel: audit: type=1325 audit(1768347828.219:677): table=nat:130 family=2 entries=14 op=nft_register_rule pid=5074 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:48.238019 kernel: audit: type=1300 audit(1768347828.219:677): arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffd70b0290 a2=0 a3=1 items=0 ppid=3666 pid=5074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:48.238210 systemd-networkd[1570]: cali703880400e3: Gained carrier Jan 13 23:43:48.252726 kernel: audit: type=1327 audit(1768347828.219:677): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:48.219000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:48.293515 containerd[1990]: 2026-01-13 23:43:47.857 [INFO][5026] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--81-k8s-goldmane--666569f655--zwrvg-eth0 goldmane-666569f655- calico-system 86622233-a85f-41fd-b458-2112644e82b9 848 0 2026-01-13 23:43:21 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-22-81 goldmane-666569f655-zwrvg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali703880400e3 [] [] }} ContainerID="643d5400d70dbe91027e68f7769c51ea75d5bbc9931de3e88defc55d80ee7378" Namespace="calico-system" Pod="goldmane-666569f655-zwrvg" WorkloadEndpoint="ip--172--31--22--81-k8s-goldmane--666569f655--zwrvg-" Jan 13 23:43:48.293515 containerd[1990]: 2026-01-13 23:43:47.857 [INFO][5026] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="643d5400d70dbe91027e68f7769c51ea75d5bbc9931de3e88defc55d80ee7378" Namespace="calico-system" Pod="goldmane-666569f655-zwrvg" WorkloadEndpoint="ip--172--31--22--81-k8s-goldmane--666569f655--zwrvg-eth0" Jan 13 23:43:48.293515 containerd[1990]: 2026-01-13 23:43:47.965 
[INFO][5052] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="643d5400d70dbe91027e68f7769c51ea75d5bbc9931de3e88defc55d80ee7378" HandleID="k8s-pod-network.643d5400d70dbe91027e68f7769c51ea75d5bbc9931de3e88defc55d80ee7378" Workload="ip--172--31--22--81-k8s-goldmane--666569f655--zwrvg-eth0" Jan 13 23:43:48.293852 containerd[1990]: 2026-01-13 23:43:47.970 [INFO][5052] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="643d5400d70dbe91027e68f7769c51ea75d5bbc9931de3e88defc55d80ee7378" HandleID="k8s-pod-network.643d5400d70dbe91027e68f7769c51ea75d5bbc9931de3e88defc55d80ee7378" Workload="ip--172--31--22--81-k8s-goldmane--666569f655--zwrvg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c340), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-81", "pod":"goldmane-666569f655-zwrvg", "timestamp":"2026-01-13 23:43:47.965166376 +0000 UTC"}, Hostname:"ip-172-31-22-81", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:43:48.293852 containerd[1990]: 2026-01-13 23:43:47.970 [INFO][5052] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:43:48.293852 containerd[1990]: 2026-01-13 23:43:47.972 [INFO][5052] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 13 23:43:48.293852 containerd[1990]: 2026-01-13 23:43:47.972 [INFO][5052] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-81' Jan 13 23:43:48.293852 containerd[1990]: 2026-01-13 23:43:48.008 [INFO][5052] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.643d5400d70dbe91027e68f7769c51ea75d5bbc9931de3e88defc55d80ee7378" host="ip-172-31-22-81" Jan 13 23:43:48.293852 containerd[1990]: 2026-01-13 23:43:48.039 [INFO][5052] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-81" Jan 13 23:43:48.293852 containerd[1990]: 2026-01-13 23:43:48.081 [INFO][5052] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:48.293852 containerd[1990]: 2026-01-13 23:43:48.101 [INFO][5052] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:48.293852 containerd[1990]: 2026-01-13 23:43:48.119 [INFO][5052] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:48.294357 containerd[1990]: 2026-01-13 23:43:48.120 [INFO][5052] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.643d5400d70dbe91027e68f7769c51ea75d5bbc9931de3e88defc55d80ee7378" host="ip-172-31-22-81" Jan 13 23:43:48.294357 containerd[1990]: 2026-01-13 23:43:48.131 [INFO][5052] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.643d5400d70dbe91027e68f7769c51ea75d5bbc9931de3e88defc55d80ee7378 Jan 13 23:43:48.294357 containerd[1990]: 2026-01-13 23:43:48.168 [INFO][5052] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.643d5400d70dbe91027e68f7769c51ea75d5bbc9931de3e88defc55d80ee7378" host="ip-172-31-22-81" Jan 13 23:43:48.294357 containerd[1990]: 2026-01-13 23:43:48.187 [INFO][5052] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.132/26] block=192.168.127.128/26 handle="k8s-pod-network.643d5400d70dbe91027e68f7769c51ea75d5bbc9931de3e88defc55d80ee7378" host="ip-172-31-22-81" Jan 13 
23:43:48.294357 containerd[1990]: 2026-01-13 23:43:48.187 [INFO][5052] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.132/26] handle="k8s-pod-network.643d5400d70dbe91027e68f7769c51ea75d5bbc9931de3e88defc55d80ee7378" host="ip-172-31-22-81" Jan 13 23:43:48.294357 containerd[1990]: 2026-01-13 23:43:48.187 [INFO][5052] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 13 23:43:48.294357 containerd[1990]: 2026-01-13 23:43:48.187 [INFO][5052] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.132/26] IPv6=[] ContainerID="643d5400d70dbe91027e68f7769c51ea75d5bbc9931de3e88defc55d80ee7378" HandleID="k8s-pod-network.643d5400d70dbe91027e68f7769c51ea75d5bbc9931de3e88defc55d80ee7378" Workload="ip--172--31--22--81-k8s-goldmane--666569f655--zwrvg-eth0" Jan 13 23:43:48.294685 containerd[1990]: 2026-01-13 23:43:48.216 [INFO][5026] cni-plugin/k8s.go 418: Populated endpoint ContainerID="643d5400d70dbe91027e68f7769c51ea75d5bbc9931de3e88defc55d80ee7378" Namespace="calico-system" Pod="goldmane-666569f655-zwrvg" WorkloadEndpoint="ip--172--31--22--81-k8s-goldmane--666569f655--zwrvg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--81-k8s-goldmane--666569f655--zwrvg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"86622233-a85f-41fd-b458-2112644e82b9", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 43, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-81", ContainerID:"", Pod:"goldmane-666569f655-zwrvg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.127.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali703880400e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:43:48.294685 containerd[1990]: 2026-01-13 23:43:48.216 [INFO][5026] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.132/32] ContainerID="643d5400d70dbe91027e68f7769c51ea75d5bbc9931de3e88defc55d80ee7378" Namespace="calico-system" Pod="goldmane-666569f655-zwrvg" WorkloadEndpoint="ip--172--31--22--81-k8s-goldmane--666569f655--zwrvg-eth0" Jan 13 23:43:48.294864 containerd[1990]: 2026-01-13 23:43:48.216 [INFO][5026] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali703880400e3 ContainerID="643d5400d70dbe91027e68f7769c51ea75d5bbc9931de3e88defc55d80ee7378" Namespace="calico-system" Pod="goldmane-666569f655-zwrvg" WorkloadEndpoint="ip--172--31--22--81-k8s-goldmane--666569f655--zwrvg-eth0" Jan 13 23:43:48.294864 containerd[1990]: 2026-01-13 23:43:48.231 [INFO][5026] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="643d5400d70dbe91027e68f7769c51ea75d5bbc9931de3e88defc55d80ee7378" Namespace="calico-system" Pod="goldmane-666569f655-zwrvg" 
WorkloadEndpoint="ip--172--31--22--81-k8s-goldmane--666569f655--zwrvg-eth0" Jan 13 23:43:48.294978 containerd[1990]: 2026-01-13 23:43:48.232 [INFO][5026] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="643d5400d70dbe91027e68f7769c51ea75d5bbc9931de3e88defc55d80ee7378" Namespace="calico-system" Pod="goldmane-666569f655-zwrvg" WorkloadEndpoint="ip--172--31--22--81-k8s-goldmane--666569f655--zwrvg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--81-k8s-goldmane--666569f655--zwrvg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"86622233-a85f-41fd-b458-2112644e82b9", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 43, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-81", ContainerID:"643d5400d70dbe91027e68f7769c51ea75d5bbc9931de3e88defc55d80ee7378", Pod:"goldmane-666569f655-zwrvg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.127.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali703880400e3", MAC:"26:26:24:31:ef:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:43:48.295113 containerd[1990]: 2026-01-13 23:43:48.286 [INFO][5026] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="643d5400d70dbe91027e68f7769c51ea75d5bbc9931de3e88defc55d80ee7378" Namespace="calico-system" Pod="goldmane-666569f655-zwrvg" WorkloadEndpoint="ip--172--31--22--81-k8s-goldmane--666569f655--zwrvg-eth0" Jan 13 23:43:48.331543 systemd[1]: Started cri-containerd-d9ef2d7cf19d7ee530501a04089bef0e01786100e31ec7e02913e748246ff9bf.scope - libcontainer container d9ef2d7cf19d7ee530501a04089bef0e01786100e31ec7e02913e748246ff9bf. 
Jan 13 23:43:48.400198 containerd[1990]: time="2026-01-13T23:43:48.399387506Z" level=info msg="connecting to shim 643d5400d70dbe91027e68f7769c51ea75d5bbc9931de3e88defc55d80ee7378" address="unix:///run/containerd/s/fbbfdbf529ca6153806dad825784f1d661d3210fcaacaa382ed1ef684112da90" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:43:48.431492 systemd-networkd[1570]: cali11c39e212e5: Link UP Jan 13 23:43:48.433566 systemd-networkd[1570]: cali11c39e212e5: Gained carrier Jan 13 23:43:48.470000 audit[5121]: NETFILTER_CFG table=filter:131 family=2 entries=54 op=nft_register_chain pid=5121 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:43:48.505232 kernel: audit: type=1325 audit(1768347828.470:678): table=filter:131 family=2 entries=54 op=nft_register_chain pid=5121 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:43:48.470000 audit[5121]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29396 a0=3 a1=ffffc10374e0 a2=0 a3=ffffa9ff1fa8 items=0 ppid=4696 pid=5121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:48.470000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:43:48.526566 kernel: audit: type=1300 audit(1768347828.470:678): arch=c00000b7 syscall=211 success=yes exit=29396 a0=3 a1=ffffc10374e0 a2=0 a3=ffffa9ff1fa8 items=0 ppid=4696 pid=5121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:48.526714 kernel: audit: type=1327 audit(1768347828.470:678): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:43:48.528155 containerd[1990]: 2026-01-13 23:43:47.924 [INFO][5012] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--g9wvb-eth0 calico-apiserver-6bc5d5895- calico-apiserver d529d459-4c8c-4f5e-b8a4-f53690574272 841 0 2026-01-13 23:43:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bc5d5895 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-22-81 calico-apiserver-6bc5d5895-g9wvb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali11c39e212e5 [] [] }} ContainerID="8cb843d4e6814df5600d78aef74980b7cddd47f532ce56abb09fbe397efb0b11" Namespace="calico-apiserver" Pod="calico-apiserver-6bc5d5895-g9wvb" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--g9wvb-" Jan 13 23:43:48.528155 containerd[1990]: 2026-01-13 23:43:47.925 [INFO][5012] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8cb843d4e6814df5600d78aef74980b7cddd47f532ce56abb09fbe397efb0b11" Namespace="calico-apiserver" Pod="calico-apiserver-6bc5d5895-g9wvb" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--g9wvb-eth0" Jan 13 23:43:48.528155 containerd[1990]: 2026-01-13 23:43:48.123 [INFO][5061] ipam/ipam_plugin.go 227: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="8cb843d4e6814df5600d78aef74980b7cddd47f532ce56abb09fbe397efb0b11" HandleID="k8s-pod-network.8cb843d4e6814df5600d78aef74980b7cddd47f532ce56abb09fbe397efb0b11" Workload="ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--g9wvb-eth0" Jan 13 23:43:48.528874 containerd[1990]: 2026-01-13 23:43:48.125 [INFO][5061] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8cb843d4e6814df5600d78aef74980b7cddd47f532ce56abb09fbe397efb0b11" HandleID="k8s-pod-network.8cb843d4e6814df5600d78aef74980b7cddd47f532ce56abb09fbe397efb0b11" Workload="ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--g9wvb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ca770), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-22-81", "pod":"calico-apiserver-6bc5d5895-g9wvb", "timestamp":"2026-01-13 23:43:48.123936505 +0000 UTC"}, Hostname:"ip-172-31-22-81", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:43:48.528874 containerd[1990]: 2026-01-13 23:43:48.126 [INFO][5061] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:43:48.528874 containerd[1990]: 2026-01-13 23:43:48.188 [INFO][5061] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 13 23:43:48.528874 containerd[1990]: 2026-01-13 23:43:48.188 [INFO][5061] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-81' Jan 13 23:43:48.528874 containerd[1990]: 2026-01-13 23:43:48.278 [INFO][5061] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8cb843d4e6814df5600d78aef74980b7cddd47f532ce56abb09fbe397efb0b11" host="ip-172-31-22-81" Jan 13 23:43:48.528874 containerd[1990]: 2026-01-13 23:43:48.310 [INFO][5061] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-81" Jan 13 23:43:48.528874 containerd[1990]: 2026-01-13 23:43:48.336 [INFO][5061] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:48.528874 containerd[1990]: 2026-01-13 23:43:48.343 [INFO][5061] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:48.528874 containerd[1990]: 2026-01-13 23:43:48.351 [INFO][5061] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:48.531483 containerd[1990]: 2026-01-13 23:43:48.352 [INFO][5061] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.8cb843d4e6814df5600d78aef74980b7cddd47f532ce56abb09fbe397efb0b11" host="ip-172-31-22-81" Jan 13 23:43:48.531483 containerd[1990]: 2026-01-13 23:43:48.357 [INFO][5061] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8cb843d4e6814df5600d78aef74980b7cddd47f532ce56abb09fbe397efb0b11 Jan 13 23:43:48.531483 containerd[1990]: 2026-01-13 23:43:48.369 [INFO][5061] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.8cb843d4e6814df5600d78aef74980b7cddd47f532ce56abb09fbe397efb0b11" host="ip-172-31-22-81" Jan 13 23:43:48.531483 containerd[1990]: 2026-01-13 23:43:48.394 [INFO][5061] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.133/26] block=192.168.127.128/26 handle="k8s-pod-network.8cb843d4e6814df5600d78aef74980b7cddd47f532ce56abb09fbe397efb0b11" host="ip-172-31-22-81" Jan 13 23:43:48.531483 containerd[1990]: 
2026-01-13 23:43:48.394 [INFO][5061] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.133/26] handle="k8s-pod-network.8cb843d4e6814df5600d78aef74980b7cddd47f532ce56abb09fbe397efb0b11" host="ip-172-31-22-81" Jan 13 23:43:48.531483 containerd[1990]: 2026-01-13 23:43:48.394 [INFO][5061] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 13 23:43:48.531483 containerd[1990]: 2026-01-13 23:43:48.394 [INFO][5061] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.133/26] IPv6=[] ContainerID="8cb843d4e6814df5600d78aef74980b7cddd47f532ce56abb09fbe397efb0b11" HandleID="k8s-pod-network.8cb843d4e6814df5600d78aef74980b7cddd47f532ce56abb09fbe397efb0b11" Workload="ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--g9wvb-eth0" Jan 13 23:43:48.531827 containerd[1990]: 2026-01-13 23:43:48.415 [INFO][5012] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8cb843d4e6814df5600d78aef74980b7cddd47f532ce56abb09fbe397efb0b11" Namespace="calico-apiserver" Pod="calico-apiserver-6bc5d5895-g9wvb" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--g9wvb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--g9wvb-eth0", GenerateName:"calico-apiserver-6bc5d5895-", Namespace:"calico-apiserver", SelfLink:"", UID:"d529d459-4c8c-4f5e-b8a4-f53690574272", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bc5d5895", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-81", ContainerID:"", Pod:"calico-apiserver-6bc5d5895-g9wvb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali11c39e212e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:43:48.531969 containerd[1990]: 2026-01-13 23:43:48.415 [INFO][5012] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.133/32] ContainerID="8cb843d4e6814df5600d78aef74980b7cddd47f532ce56abb09fbe397efb0b11" Namespace="calico-apiserver" Pod="calico-apiserver-6bc5d5895-g9wvb" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--g9wvb-eth0" Jan 13 23:43:48.531969 containerd[1990]: 2026-01-13 23:43:48.415 [INFO][5012] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali11c39e212e5 ContainerID="8cb843d4e6814df5600d78aef74980b7cddd47f532ce56abb09fbe397efb0b11" Namespace="calico-apiserver" Pod="calico-apiserver-6bc5d5895-g9wvb" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--g9wvb-eth0" Jan 13 23:43:48.531969 containerd[1990]: 2026-01-13 23:43:48.432 [INFO][5012] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="8cb843d4e6814df5600d78aef74980b7cddd47f532ce56abb09fbe397efb0b11" Namespace="calico-apiserver" Pod="calico-apiserver-6bc5d5895-g9wvb" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--g9wvb-eth0" Jan 13 23:43:48.534177 containerd[1990]: 2026-01-13 23:43:48.435 [INFO][5012] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8cb843d4e6814df5600d78aef74980b7cddd47f532ce56abb09fbe397efb0b11" Namespace="calico-apiserver" Pod="calico-apiserver-6bc5d5895-g9wvb" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--g9wvb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--g9wvb-eth0", GenerateName:"calico-apiserver-6bc5d5895-", Namespace:"calico-apiserver", SelfLink:"", UID:"d529d459-4c8c-4f5e-b8a4-f53690574272", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bc5d5895", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-81", ContainerID:"8cb843d4e6814df5600d78aef74980b7cddd47f532ce56abb09fbe397efb0b11", Pod:"calico-apiserver-6bc5d5895-g9wvb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali11c39e212e5", MAC:"62:b2:7d:3f:9e:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:43:48.534367 containerd[1990]: 2026-01-13 23:43:48.476 [INFO][5012] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8cb843d4e6814df5600d78aef74980b7cddd47f532ce56abb09fbe397efb0b11" Namespace="calico-apiserver" Pod="calico-apiserver-6bc5d5895-g9wvb" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--g9wvb-eth0" Jan 13 23:43:48.560544 systemd[1]: Started cri-containerd-643d5400d70dbe91027e68f7769c51ea75d5bbc9931de3e88defc55d80ee7378.scope - libcontainer container 643d5400d70dbe91027e68f7769c51ea75d5bbc9931de3e88defc55d80ee7378. 
Jan 13 23:43:48.577000 audit: BPF prog-id=223 op=LOAD Jan 13 23:43:48.582174 kernel: audit: type=1334 audit(1768347828.577:679): prog-id=223 op=LOAD Jan 13 23:43:48.583000 audit: BPF prog-id=224 op=LOAD Jan 13 23:43:48.583000 audit[5096]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400018c180 a2=98 a3=0 items=0 ppid=5084 pid=5096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:48.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439656632643763663139643765653533303530316130343038396265 Jan 13 23:43:48.583000 audit: BPF prog-id=224 op=UNLOAD Jan 13 23:43:48.583000 audit[5096]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5084 pid=5096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:48.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439656632643763663139643765653533303530316130343038396265 Jan 13 23:43:48.585000 audit: BPF prog-id=225 op=LOAD Jan 13 23:43:48.585000 audit[5096]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400018c3e8 a2=98 a3=0 items=0 ppid=5084 pid=5096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:48.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439656632643763663139643765653533303530316130343038396265 Jan 13 23:43:48.586000 audit: BPF prog-id=226 op=LOAD Jan 13 23:43:48.586000 audit[5096]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400018c168 a2=98 a3=0 items=0 ppid=5084 pid=5096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:48.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439656632643763663139643765653533303530316130343038396265 Jan 13 23:43:48.587000 audit: BPF prog-id=226 op=UNLOAD Jan 13 23:43:48.587000 audit[5096]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5084 pid=5096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:48.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439656632643763663139643765653533303530316130343038396265 Jan 
13 23:43:48.587000 audit: BPF prog-id=225 op=UNLOAD Jan 13 23:43:48.587000 audit[5096]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5084 pid=5096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:48.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439656632643763663139643765653533303530316130343038396265 Jan 13 23:43:48.587000 audit: BPF prog-id=227 op=LOAD Jan 13 23:43:48.587000 audit[5096]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400018c648 a2=98 a3=0 items=0 ppid=5084 pid=5096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:48.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439656632643763663139643765653533303530316130343038396265 Jan 13 23:43:48.615157 containerd[1990]: time="2026-01-13T23:43:48.613670775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pkh58,Uid:ffeb255f-8b45-4b39-ab1a-757accecb002,Namespace:kube-system,Attempt:0,}" Jan 13 23:43:48.620198 containerd[1990]: time="2026-01-13T23:43:48.618093099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lblk8,Uid:884e12a9-b4d3-4695-bc91-5cdf1a464d0b,Namespace:calico-system,Attempt:0,}" Jan 13 23:43:48.732455 systemd-networkd[1570]: cali56b756c4269: Gained IPv6LL Jan 13 23:43:48.761753 containerd[1990]: time="2026-01-13T23:43:48.761498932Z" level=info msg="connecting to shim 8cb843d4e6814df5600d78aef74980b7cddd47f532ce56abb09fbe397efb0b11" address="unix:///run/containerd/s/9b6e7a6d027285ebcfc5ce299ec5605a26c15b0f885ec20ca5b1776abb1002b6" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:43:48.915000 audit[5199]: NETFILTER_CFG table=filter:132 family=2 entries=81 op=nft_register_chain pid=5199 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:43:48.915000 audit[5199]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=46544 a0=3 a1=ffffe4d701a0 a2=0 a3=ffff98f3cfa8 items=0 ppid=4696 pid=5199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:48.915000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:43:48.932000 audit: BPF prog-id=228 op=LOAD Jan 13 23:43:48.936000 audit: BPF prog-id=229 op=LOAD Jan 13 23:43:48.936000 audit[5144]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=5131 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:48.936000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634336435343030643730646265393130323765363866373736396335 Jan 13 23:43:48.937000 audit: BPF prog-id=229 op=UNLOAD Jan 13 23:43:48.937000 audit[5144]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5131 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:48.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634336435343030643730646265393130323765363866373736396335 Jan 13 23:43:48.939000 audit: BPF prog-id=230 op=LOAD Jan 13 23:43:48.939000 audit[5144]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=5131 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:48.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634336435343030643730646265393130323765363866373736396335 Jan 13 23:43:48.941000 audit: BPF prog-id=231 op=LOAD Jan 13 23:43:48.941000 audit[5144]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=5131 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:48.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634336435343030643730646265393130323765363866373736396335 Jan 13 23:43:48.945000 audit: BPF prog-id=231 op=UNLOAD Jan 13 23:43:48.945000 audit[5144]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5131 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:48.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634336435343030643730646265393130323765363866373736396335 Jan 13 23:43:48.948000 audit: BPF prog-id=230 op=UNLOAD Jan 13 23:43:48.948000 audit[5144]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5131 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:48.948000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634336435343030643730646265393130323765363866373736396335 Jan 13 23:43:48.959000 audit: BPF prog-id=232 op=LOAD Jan 13 23:43:48.959000 audit[5144]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=5131 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:48.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634336435343030643730646265393130323765363866373736396335 Jan 13 23:43:48.971768 systemd[1]: Started cri-containerd-8cb843d4e6814df5600d78aef74980b7cddd47f532ce56abb09fbe397efb0b11.scope - libcontainer container 8cb843d4e6814df5600d78aef74980b7cddd47f532ce56abb09fbe397efb0b11. Jan 13 23:43:49.061629 containerd[1990]: time="2026-01-13T23:43:49.060825829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9dddc9f7-sd8kc,Uid:db70aac6-82d4-4ef8-98ae-1ad4091dd76e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d9ef2d7cf19d7ee530501a04089bef0e01786100e31ec7e02913e748246ff9bf\"" Jan 13 23:43:49.080251 kubelet[3562]: E0113 23:43:49.080164 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b8d4f74f9-kqpzz" podUID="3317981a-15b4-41f8-a3cf-26fbd9c6fbf1" Jan 13 23:43:49.090770 containerd[1990]: time="2026-01-13T23:43:49.089957821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:43:49.149000 audit: BPF prog-id=233 op=LOAD Jan 13 23:43:49.153000 audit: BPF prog-id=234 op=LOAD Jan 13 23:43:49.153000 audit[5215]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5192 pid=5215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:49.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863623834336434653638313464663536303064373861656637343938 Jan 13 23:43:49.154000 audit: BPF prog-id=234 op=UNLOAD Jan 13 23:43:49.154000 audit[5215]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5192 pid=5215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:49.154000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863623834336434653638313464663536303064373861656637343938 Jan 13 23:43:49.154000 audit: BPF prog-id=235 op=LOAD Jan 13 23:43:49.154000 audit[5215]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5192 pid=5215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:49.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863623834336434653638313464663536303064373861656637343938 Jan 13 23:43:49.154000 audit: BPF prog-id=236 op=LOAD Jan 13 23:43:49.154000 audit[5215]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5192 pid=5215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:49.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863623834336434653638313464663536303064373861656637343938 Jan 13 23:43:49.154000 audit: BPF prog-id=236 op=UNLOAD Jan 13 23:43:49.154000 audit[5215]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5192 pid=5215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:49.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863623834336434653638313464663536303064373861656637343938 Jan 13 23:43:49.154000 audit: BPF prog-id=235 op=UNLOAD Jan 13 23:43:49.154000 audit[5215]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5192 pid=5215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:49.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863623834336434653638313464663536303064373861656637343938 Jan 13 23:43:49.154000 audit: BPF prog-id=237 op=LOAD Jan 13 23:43:49.154000 audit[5215]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5192 pid=5215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:49.154000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863623834336434653638313464663536303064373861656637343938 Jan 13 23:43:49.250865 containerd[1990]: time="2026-01-13T23:43:49.250281794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zwrvg,Uid:86622233-a85f-41fd-b458-2112644e82b9,Namespace:calico-system,Attempt:0,} returns sandbox id \"643d5400d70dbe91027e68f7769c51ea75d5bbc9931de3e88defc55d80ee7378\"" Jan 13 23:43:49.294814 containerd[1990]: time="2026-01-13T23:43:49.294458978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bc5d5895-g9wvb,Uid:d529d459-4c8c-4f5e-b8a4-f53690574272,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8cb843d4e6814df5600d78aef74980b7cddd47f532ce56abb09fbe397efb0b11\"" Jan 13 23:43:49.361969 systemd-networkd[1570]: calid52787ae101: Link UP Jan 13 23:43:49.363513 systemd-networkd[1570]: calid52787ae101: Gained carrier Jan 13 23:43:49.387866 containerd[1990]: time="2026-01-13T23:43:49.387362895Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:43:49.400481 containerd[1990]: time="2026-01-13T23:43:49.400397331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:43:49.402155 containerd[1990]: time="2026-01-13T23:43:49.401155983Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:43:49.403546 kubelet[3562]: E0113 23:43:49.403271 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:43:49.403546 kubelet[3562]: E0113 23:43:49.403349 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:43:49.411056 kubelet[3562]: E0113 23:43:49.406348 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxwhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c9dddc9f7-sd8kc_calico-apiserver(db70aac6-82d4-4ef8-98ae-1ad4091dd76e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:43:49.411056 kubelet[3562]: E0113 23:43:49.407985 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c9dddc9f7-sd8kc" podUID="db70aac6-82d4-4ef8-98ae-1ad4091dd76e" Jan 13 23:43:49.413157 containerd[1990]: time="2026-01-13T23:43:49.413036799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 13 23:43:49.421072 containerd[1990]: 2026-01-13 23:43:48.893 [INFO][5194] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--81-k8s-csi--node--driver--lblk8-eth0 csi-node-driver- calico-system 884e12a9-b4d3-4695-bc91-5cdf1a464d0b 734 0 2026-01-13 23:43:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-22-81 csi-node-driver-lblk8 eth0 csi-node-driver [] [] [kns.calico-system 
ksa.calico-system.csi-node-driver] calid52787ae101 [] [] }} ContainerID="bcf6822a9e7bc3060191a3cd30794d28bbf71f418aacd4fa4cd31cc03b768d61" Namespace="calico-system" Pod="csi-node-driver-lblk8" WorkloadEndpoint="ip--172--31--22--81-k8s-csi--node--driver--lblk8-" Jan 13 23:43:49.421072 containerd[1990]: 2026-01-13 23:43:48.894 [INFO][5194] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bcf6822a9e7bc3060191a3cd30794d28bbf71f418aacd4fa4cd31cc03b768d61" Namespace="calico-system" Pod="csi-node-driver-lblk8" WorkloadEndpoint="ip--172--31--22--81-k8s-csi--node--driver--lblk8-eth0" Jan 13 23:43:49.421072 containerd[1990]: 2026-01-13 23:43:49.220 [INFO][5225] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bcf6822a9e7bc3060191a3cd30794d28bbf71f418aacd4fa4cd31cc03b768d61" HandleID="k8s-pod-network.bcf6822a9e7bc3060191a3cd30794d28bbf71f418aacd4fa4cd31cc03b768d61" Workload="ip--172--31--22--81-k8s-csi--node--driver--lblk8-eth0" Jan 13 23:43:49.421750 containerd[1990]: 2026-01-13 23:43:49.220 [INFO][5225] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bcf6822a9e7bc3060191a3cd30794d28bbf71f418aacd4fa4cd31cc03b768d61" HandleID="k8s-pod-network.bcf6822a9e7bc3060191a3cd30794d28bbf71f418aacd4fa4cd31cc03b768d61" Workload="ip--172--31--22--81-k8s-csi--node--driver--lblk8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003916c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-81", "pod":"csi-node-driver-lblk8", "timestamp":"2026-01-13 23:43:49.220559846 +0000 UTC"}, Hostname:"ip-172-31-22-81", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:43:49.421750 containerd[1990]: 2026-01-13 23:43:49.220 [INFO][5225] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:43:49.421750 containerd[1990]: 2026-01-13 23:43:49.221 [INFO][5225] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 13 23:43:49.421750 containerd[1990]: 2026-01-13 23:43:49.221 [INFO][5225] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-81' Jan 13 23:43:49.421750 containerd[1990]: 2026-01-13 23:43:49.266 [INFO][5225] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bcf6822a9e7bc3060191a3cd30794d28bbf71f418aacd4fa4cd31cc03b768d61" host="ip-172-31-22-81" Jan 13 23:43:49.421750 containerd[1990]: 2026-01-13 23:43:49.281 [INFO][5225] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-81" Jan 13 23:43:49.421750 containerd[1990]: 2026-01-13 23:43:49.299 [INFO][5225] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:49.421750 containerd[1990]: 2026-01-13 23:43:49.307 [INFO][5225] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:49.421750 containerd[1990]: 2026-01-13 23:43:49.315 [INFO][5225] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:49.422863 containerd[1990]: 2026-01-13 23:43:49.315 [INFO][5225] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.bcf6822a9e7bc3060191a3cd30794d28bbf71f418aacd4fa4cd31cc03b768d61" host="ip-172-31-22-81" Jan 13 23:43:49.422863 containerd[1990]: 2026-01-13 23:43:49.321 [INFO][5225] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bcf6822a9e7bc3060191a3cd30794d28bbf71f418aacd4fa4cd31cc03b768d61 Jan 13 23:43:49.422863 containerd[1990]: 2026-01-13 23:43:49.329 [INFO][5225] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.bcf6822a9e7bc3060191a3cd30794d28bbf71f418aacd4fa4cd31cc03b768d61" host="ip-172-31-22-81" Jan 13 23:43:49.422863 containerd[1990]: 2026-01-13 23:43:49.343 [INFO][5225] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.134/26] block=192.168.127.128/26 handle="k8s-pod-network.bcf6822a9e7bc3060191a3cd30794d28bbf71f418aacd4fa4cd31cc03b768d61" host="ip-172-31-22-81" Jan 13 23:43:49.422863 containerd[1990]: 2026-01-13 23:43:49.344 [INFO][5225] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.134/26] handle="k8s-pod-network.bcf6822a9e7bc3060191a3cd30794d28bbf71f418aacd4fa4cd31cc03b768d61" host="ip-172-31-22-81" Jan 13 23:43:49.422863 containerd[1990]: 2026-01-13 23:43:49.344 [INFO][5225] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 13 23:43:49.422863 containerd[1990]: 2026-01-13 23:43:49.345 [INFO][5225] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.134/26] IPv6=[] ContainerID="bcf6822a9e7bc3060191a3cd30794d28bbf71f418aacd4fa4cd31cc03b768d61" HandleID="k8s-pod-network.bcf6822a9e7bc3060191a3cd30794d28bbf71f418aacd4fa4cd31cc03b768d61" Workload="ip--172--31--22--81-k8s-csi--node--driver--lblk8-eth0" Jan 13 23:43:49.424633 containerd[1990]: 2026-01-13 23:43:49.352 [INFO][5194] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bcf6822a9e7bc3060191a3cd30794d28bbf71f418aacd4fa4cd31cc03b768d61" Namespace="calico-system" Pod="csi-node-driver-lblk8" WorkloadEndpoint="ip--172--31--22--81-k8s-csi--node--driver--lblk8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--81-k8s-csi--node--driver--lblk8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"884e12a9-b4d3-4695-bc91-5cdf1a464d0b", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-81", ContainerID:"", Pod:"csi-node-driver-lblk8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.127.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid52787ae101", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:43:49.424870 containerd[1990]: 2026-01-13 23:43:49.353 [INFO][5194] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.134/32] ContainerID="bcf6822a9e7bc3060191a3cd30794d28bbf71f418aacd4fa4cd31cc03b768d61" Namespace="calico-system" Pod="csi-node-driver-lblk8" WorkloadEndpoint="ip--172--31--22--81-k8s-csi--node--driver--lblk8-eth0" Jan 13 23:43:49.424870 containerd[1990]: 2026-01-13 23:43:49.353 [INFO][5194] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid52787ae101 ContainerID="bcf6822a9e7bc3060191a3cd30794d28bbf71f418aacd4fa4cd31cc03b768d61" Namespace="calico-system" Pod="csi-node-driver-lblk8" WorkloadEndpoint="ip--172--31--22--81-k8s-csi--node--driver--lblk8-eth0" Jan 13 23:43:49.424870 containerd[1990]: 2026-01-13 23:43:49.364 [INFO][5194] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bcf6822a9e7bc3060191a3cd30794d28bbf71f418aacd4fa4cd31cc03b768d61" Namespace="calico-system" Pod="csi-node-driver-lblk8" WorkloadEndpoint="ip--172--31--22--81-k8s-csi--node--driver--lblk8-eth0" Jan 13 23:43:49.425047 containerd[1990]: 2026-01-13 23:43:49.367 [INFO][5194] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bcf6822a9e7bc3060191a3cd30794d28bbf71f418aacd4fa4cd31cc03b768d61" 
Namespace="calico-system" Pod="csi-node-driver-lblk8" WorkloadEndpoint="ip--172--31--22--81-k8s-csi--node--driver--lblk8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--81-k8s-csi--node--driver--lblk8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"884e12a9-b4d3-4695-bc91-5cdf1a464d0b", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-81", ContainerID:"bcf6822a9e7bc3060191a3cd30794d28bbf71f418aacd4fa4cd31cc03b768d61", Pod:"csi-node-driver-lblk8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.127.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid52787ae101", MAC:"86:aa:31:e6:ce:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:43:49.426615 containerd[1990]: 2026-01-13 23:43:49.404 [INFO][5194] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bcf6822a9e7bc3060191a3cd30794d28bbf71f418aacd4fa4cd31cc03b768d61" Namespace="calico-system" Pod="csi-node-driver-lblk8" WorkloadEndpoint="ip--172--31--22--81-k8s-csi--node--driver--lblk8-eth0" Jan 13 23:43:49.436490 systemd-networkd[1570]: calidc7f545cbc7: Gained IPv6LL Jan 13 23:43:49.499060 containerd[1990]: time="2026-01-13T23:43:49.498653307Z" level=info msg="connecting to shim bcf6822a9e7bc3060191a3cd30794d28bbf71f418aacd4fa4cd31cc03b768d61" address="unix:///run/containerd/s/df9a77e6b01328be5bc6940b39e1a6d2d9c2d64db8b37c65d85970ae9f8b11d8" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:43:49.517707 systemd-networkd[1570]: cali32b04d5117d: Link UP Jan 13 23:43:49.519503 systemd-networkd[1570]: cali32b04d5117d: Gained carrier Jan 13 23:43:49.538000 audit[5300]: NETFILTER_CFG table=filter:133 family=2 entries=58 op=nft_register_chain pid=5300 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:43:49.538000 audit[5300]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27180 a0=3 a1=ffffe69f8360 a2=0 a3=ffff9237ffa8 items=0 ppid=4696 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:49.538000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:43:49.565427 containerd[1990]: 2026-01-13 23:43:49.115 [INFO][5178] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ip--172--31--22--81-k8s-coredns--668d6bf9bc--pkh58-eth0 coredns-668d6bf9bc- kube-system ffeb255f-8b45-4b39-ab1a-757accecb002 837 0 2026-01-13 23:42:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-22-81 coredns-668d6bf9bc-pkh58 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali32b04d5117d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701" Namespace="kube-system" Pod="coredns-668d6bf9bc-pkh58" WorkloadEndpoint="ip--172--31--22--81-k8s-coredns--668d6bf9bc--pkh58-" Jan 13 23:43:49.565427 containerd[1990]: 2026-01-13 23:43:49.115 [INFO][5178] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701" Namespace="kube-system" Pod="coredns-668d6bf9bc-pkh58" WorkloadEndpoint="ip--172--31--22--81-k8s-coredns--668d6bf9bc--pkh58-eth0" Jan 13 23:43:49.565427 containerd[1990]: 2026-01-13 23:43:49.309 [INFO][5254] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701" HandleID="k8s-pod-network.e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701" Workload="ip--172--31--22--81-k8s-coredns--668d6bf9bc--pkh58-eth0" Jan 13 23:43:49.566706 containerd[1990]: 2026-01-13 23:43:49.310 [INFO][5254] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701" HandleID="k8s-pod-network.e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701" Workload="ip--172--31--22--81-k8s-coredns--668d6bf9bc--pkh58-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330440), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-81", "pod":"coredns-668d6bf9bc-pkh58", "timestamp":"2026-01-13 23:43:49.309709875 +0000 UTC"}, Hostname:"ip-172-31-22-81", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:43:49.566706 containerd[1990]: 2026-01-13 23:43:49.310 [INFO][5254] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:43:49.566706 containerd[1990]: 2026-01-13 23:43:49.345 [INFO][5254] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 13 23:43:49.566706 containerd[1990]: 2026-01-13 23:43:49.345 [INFO][5254] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-81' Jan 13 23:43:49.566706 containerd[1990]: 2026-01-13 23:43:49.381 [INFO][5254] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701" host="ip-172-31-22-81" Jan 13 23:43:49.566706 containerd[1990]: 2026-01-13 23:43:49.418 [INFO][5254] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-81" Jan 13 23:43:49.566706 containerd[1990]: 2026-01-13 23:43:49.433 [INFO][5254] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:49.566706 containerd[1990]: 2026-01-13 23:43:49.440 [INFO][5254] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:49.566706 containerd[1990]: 2026-01-13 23:43:49.451 [INFO][5254] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:49.569668 containerd[1990]: 2026-01-13 23:43:49.451 [INFO][5254] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701" host="ip-172-31-22-81" Jan 13 23:43:49.569668 containerd[1990]: 2026-01-13 23:43:49.454 [INFO][5254] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701 Jan 13 23:43:49.569668 containerd[1990]: 2026-01-13 23:43:49.466 [INFO][5254] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701" host="ip-172-31-22-81" Jan 13 23:43:49.569668 containerd[1990]: 2026-01-13 23:43:49.499 [INFO][5254] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.135/26] block=192.168.127.128/26 handle="k8s-pod-network.e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701" host="ip-172-31-22-81" Jan 13 23:43:49.569668 containerd[1990]: 2026-01-13 23:43:49.500 [INFO][5254] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.135/26] handle="k8s-pod-network.e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701" host="ip-172-31-22-81" Jan 13 23:43:49.569668 containerd[1990]: 2026-01-13 23:43:49.500 [INFO][5254] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 13 23:43:49.569668 containerd[1990]: 2026-01-13 23:43:49.504 [INFO][5254] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.135/26] IPv6=[] ContainerID="e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701" HandleID="k8s-pod-network.e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701" Workload="ip--172--31--22--81-k8s-coredns--668d6bf9bc--pkh58-eth0" Jan 13 23:43:49.571214 containerd[1990]: 2026-01-13 23:43:49.510 [INFO][5178] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701" Namespace="kube-system" Pod="coredns-668d6bf9bc-pkh58" WorkloadEndpoint="ip--172--31--22--81-k8s-coredns--668d6bf9bc--pkh58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--81-k8s-coredns--668d6bf9bc--pkh58-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ffeb255f-8b45-4b39-ab1a-757accecb002", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 42, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-81", ContainerID:"", Pod:"coredns-668d6bf9bc-pkh58", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali32b04d5117d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:43:49.571214 containerd[1990]: 2026-01-13 23:43:49.510 [INFO][5178] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.135/32] ContainerID="e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701" Namespace="kube-system" Pod="coredns-668d6bf9bc-pkh58" WorkloadEndpoint="ip--172--31--22--81-k8s-coredns--668d6bf9bc--pkh58-eth0" Jan 13 23:43:49.571214 containerd[1990]: 2026-01-13 23:43:49.510 [INFO][5178] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32b04d5117d ContainerID="e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701" Namespace="kube-system" Pod="coredns-668d6bf9bc-pkh58" WorkloadEndpoint="ip--172--31--22--81-k8s-coredns--668d6bf9bc--pkh58-eth0" Jan 13 23:43:49.571214 containerd[1990]: 2026-01-13 23:43:49.519 [INFO][5178] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701" Namespace="kube-system" Pod="coredns-668d6bf9bc-pkh58" 
WorkloadEndpoint="ip--172--31--22--81-k8s-coredns--668d6bf9bc--pkh58-eth0" Jan 13 23:43:49.571214 containerd[1990]: 2026-01-13 23:43:49.523 [INFO][5178] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701" Namespace="kube-system" Pod="coredns-668d6bf9bc-pkh58" WorkloadEndpoint="ip--172--31--22--81-k8s-coredns--668d6bf9bc--pkh58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--81-k8s-coredns--668d6bf9bc--pkh58-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ffeb255f-8b45-4b39-ab1a-757accecb002", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 42, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-81", ContainerID:"e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701", Pod:"coredns-668d6bf9bc-pkh58", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali32b04d5117d", MAC:"ce:98:63:d3:05:92", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:43:49.571214 containerd[1990]: 2026-01-13 23:43:49.556 [INFO][5178] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701" Namespace="kube-system" Pod="coredns-668d6bf9bc-pkh58" WorkloadEndpoint="ip--172--31--22--81-k8s-coredns--668d6bf9bc--pkh58-eth0" Jan 13 23:43:49.602672 containerd[1990]: time="2026-01-13T23:43:49.602581600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bc5d5895-dvsmb,Uid:7796067b-5cab-42e9-af9d-320bb4208060,Namespace:calico-apiserver,Attempt:0,}" Jan 13 23:43:49.631739 systemd-networkd[1570]: cali703880400e3: Gained IPv6LL Jan 13 23:43:49.677609 systemd[1]: Started cri-containerd-bcf6822a9e7bc3060191a3cd30794d28bbf71f418aacd4fa4cd31cc03b768d61.scope - libcontainer container bcf6822a9e7bc3060191a3cd30794d28bbf71f418aacd4fa4cd31cc03b768d61. 
Jan 13 23:43:49.727733 containerd[1990]: time="2026-01-13T23:43:49.726194093Z" level=info msg="connecting to shim e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701" address="unix:///run/containerd/s/7bfbd543186fd99d1cec46d7894d4cfcb2866337869a001b5b8b17a2e001d15e" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:43:49.736567 containerd[1990]: time="2026-01-13T23:43:49.736486493Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:43:49.739150 containerd[1990]: time="2026-01-13T23:43:49.739025069Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 13 23:43:49.740609 containerd[1990]: time="2026-01-13T23:43:49.740403629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 13 23:43:49.741120 kubelet[3562]: E0113 23:43:49.741054 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:43:49.741552 kubelet[3562]: E0113 23:43:49.741481 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:43:49.742603 containerd[1990]: time="2026-01-13T23:43:49.742426169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:43:49.742859 kubelet[3562]: E0113 23:43:49.742103 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hmwts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zwrvg_calico-system(86622233-a85f-41fd-b458-2112644e82b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 13 23:43:49.744909 kubelet[3562]: E0113 23:43:49.744249 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zwrvg" podUID="86622233-a85f-41fd-b458-2112644e82b9" Jan 13 23:43:49.848105 systemd[1]: Started cri-containerd-e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701.scope - libcontainer container e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701. 
Jan 13 23:43:49.942000 audit: BPF prog-id=238 op=LOAD Jan 13 23:43:49.943000 audit: BPF prog-id=239 op=LOAD Jan 13 23:43:49.943000 audit[5350]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5335 pid=5350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:49.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532326435376230346235326238333735626461633130653132313738 Jan 13 23:43:49.943000 audit: BPF prog-id=239 op=UNLOAD Jan 13 23:43:49.943000 audit[5350]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5335 pid=5350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:49.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532326435376230346235326238333735626461633130653132313738 Jan 13 23:43:49.943000 audit: BPF prog-id=240 op=LOAD Jan 13 23:43:49.943000 audit[5350]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5335 pid=5350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:49.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532326435376230346235326238333735626461633130653132313738 Jan 13 23:43:49.944000 audit: BPF prog-id=241 op=LOAD Jan 13 23:43:49.944000 audit[5350]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5335 pid=5350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:49.944000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532326435376230346235326238333735626461633130653132313738 Jan 13 23:43:49.944000 audit: BPF prog-id=241 op=UNLOAD Jan 13 23:43:49.944000 audit[5350]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5335 pid=5350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:49.944000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532326435376230346235326238333735626461633130653132313738 Jan 13 23:43:49.944000 audit: BPF prog-id=240 op=UNLOAD Jan 13 23:43:49.944000 audit[5350]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5335 pid=5350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:49.944000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532326435376230346235326238333735626461633130653132313738 Jan 13 23:43:49.944000 audit: BPF prog-id=242 op=LOAD Jan 13 23:43:49.944000 audit[5350]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5335 pid=5350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:49.944000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532326435376230346235326238333735626461633130653132313738 Jan 13 23:43:49.973000 audit[5366]: NETFILTER_CFG table=filter:134 family=2 entries=58 op=nft_register_chain pid=5366 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:43:49.984000 audit: BPF prog-id=243 op=LOAD Jan 13 23:43:49.987000 audit: BPF prog-id=244 op=LOAD Jan 13 23:43:49.987000 audit[5303]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5289 pid=5303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:49.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263663638323261396537626333303630313931613363643330373934 Jan 13 23:43:49.973000 audit[5366]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27288 a0=3 a1=fffffe98d550 a2=0 a3=ffffadecffa8 items=0 ppid=4696 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:49.973000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:43:49.988000 audit: BPF prog-id=244 op=UNLOAD Jan 13 23:43:49.988000 audit[5303]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5289 pid=5303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:49.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263663638323261396537626333303630313931613363643330373934 Jan 13 23:43:49.988000 audit: BPF prog-id=245 op=LOAD Jan 13 23:43:49.988000 audit[5303]: SYSCALL arch=c00000b7 syscall=280 success=yes 
exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5289 pid=5303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:49.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263663638323261396537626333303630313931613363643330373934 Jan 13 23:43:49.990000 audit: BPF prog-id=246 op=LOAD Jan 13 23:43:49.990000 audit[5303]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5289 pid=5303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:49.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263663638323261396537626333303630313931613363643330373934 Jan 13 23:43:49.991000 audit: BPF prog-id=246 op=UNLOAD Jan 13 23:43:49.991000 audit[5303]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5289 pid=5303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:49.991000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263663638323261396537626333303630313931613363643330373934 Jan 13 23:43:49.991000 audit: BPF prog-id=245 op=UNLOAD Jan 13 23:43:49.991000 audit[5303]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5289 pid=5303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:49.991000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263663638323261396537626333303630313931613363643330373934 Jan 13 23:43:49.992000 audit: BPF prog-id=247 op=LOAD Jan 13 23:43:49.992000 audit[5303]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5289 pid=5303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:49.992000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263663638323261396537626333303630313931613363643330373934 Jan 13 23:43:50.040651 containerd[1990]: time="2026-01-13T23:43:50.040576358Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:43:50.044251 containerd[1990]: time="2026-01-13T23:43:50.044076626Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:43:50.044380 containerd[1990]: time="2026-01-13T23:43:50.044275994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:43:50.046089 kubelet[3562]: E0113 23:43:50.044596 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:43:50.046089 kubelet[3562]: E0113 23:43:50.044660 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:43:50.046089 kubelet[3562]: E0113 23:43:50.044827 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2dgwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bc5d5895-g9wvb_calico-apiserver(d529d459-4c8c-4f5e-b8a4-f53690574272): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:43:50.047074 kubelet[3562]: E0113 23:43:50.046938 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-g9wvb" podUID="d529d459-4c8c-4f5e-b8a4-f53690574272" Jan 13 23:43:50.066301 kubelet[3562]: E0113 23:43:50.065370 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zwrvg" podUID="86622233-a85f-41fd-b458-2112644e82b9" Jan 13 23:43:50.068889 kubelet[3562]: E0113 23:43:50.068702 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-g9wvb" podUID="d529d459-4c8c-4f5e-b8a4-f53690574272" Jan 13 23:43:50.076035 containerd[1990]: time="2026-01-13T23:43:50.071747738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pkh58,Uid:ffeb255f-8b45-4b39-ab1a-757accecb002,Namespace:kube-system,Attempt:0,} returns sandbox id \"e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701\"" Jan 13 23:43:50.088524 kubelet[3562]: E0113 23:43:50.088473 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c9dddc9f7-sd8kc" podUID="db70aac6-82d4-4ef8-98ae-1ad4091dd76e" Jan 13 23:43:50.093095 containerd[1990]: time="2026-01-13T23:43:50.092649002Z" level=info msg="CreateContainer within sandbox \"e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 23:43:50.097289 containerd[1990]: time="2026-01-13T23:43:50.097183082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lblk8,Uid:884e12a9-b4d3-4695-bc91-5cdf1a464d0b,Namespace:calico-system,Attempt:0,} returns sandbox id \"bcf6822a9e7bc3060191a3cd30794d28bbf71f418aacd4fa4cd31cc03b768d61\"" Jan 13 23:43:50.105867 containerd[1990]: time="2026-01-13T23:43:50.105800654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 13 23:43:50.141123 systemd-networkd[1570]: cali11c39e212e5: Gained IPv6LL Jan 13 23:43:50.182175 containerd[1990]: time="2026-01-13T23:43:50.182069175Z" level=info msg="Container 
4f3fdb80f13e989ad064d83896764a53ad4f78ded00019f6c22a22176e6a00d2: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:43:50.192000 audit[5408]: NETFILTER_CFG table=filter:135 family=2 entries=20 op=nft_register_rule pid=5408 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:50.192000 audit[5408]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc4674500 a2=0 a3=1 items=0 ppid=3666 pid=5408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:50.192000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:50.198000 audit[5408]: NETFILTER_CFG table=nat:136 family=2 entries=14 op=nft_register_rule pid=5408 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:50.198000 audit[5408]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffc4674500 a2=0 a3=1 items=0 ppid=3666 pid=5408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:50.198000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:50.201704 containerd[1990]: time="2026-01-13T23:43:50.200864463Z" level=info msg="CreateContainer within sandbox \"e22d57b04b52b8375bdac10e12178211f1332e57723ae44794075fd11a5b8701\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4f3fdb80f13e989ad064d83896764a53ad4f78ded00019f6c22a22176e6a00d2\"" Jan 13 23:43:50.204028 containerd[1990]: time="2026-01-13T23:43:50.203365683Z" level=info msg="StartContainer for \"4f3fdb80f13e989ad064d83896764a53ad4f78ded00019f6c22a22176e6a00d2\"" Jan 13 23:43:50.208750 containerd[1990]: time="2026-01-13T23:43:50.208663395Z" level=info msg="connecting to shim 4f3fdb80f13e989ad064d83896764a53ad4f78ded00019f6c22a22176e6a00d2" address="unix:///run/containerd/s/7bfbd543186fd99d1cec46d7894d4cfcb2866337869a001b5b8b17a2e001d15e" protocol=ttrpc version=3 Jan 13 23:43:50.265000 audit[5422]: NETFILTER_CFG table=filter:137 family=2 entries=20 op=nft_register_rule pid=5422 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:50.265000 audit[5422]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe609a300 a2=0 a3=1 items=0 ppid=3666 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:50.265000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:50.285540 systemd[1]: Started cri-containerd-4f3fdb80f13e989ad064d83896764a53ad4f78ded00019f6c22a22176e6a00d2.scope - libcontainer container 4f3fdb80f13e989ad064d83896764a53ad4f78ded00019f6c22a22176e6a00d2. 
Jan 13 23:43:50.305000 audit[5422]: NETFILTER_CFG table=nat:138 family=2 entries=14 op=nft_register_rule pid=5422 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:50.305000 audit[5422]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe609a300 a2=0 a3=1 items=0 ppid=3666 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:50.305000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:50.346000 audit: BPF prog-id=248 op=LOAD Jan 13 23:43:50.349000 audit: BPF prog-id=249 op=LOAD Jan 13 23:43:50.349000 audit[5409]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=5335 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:50.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466336664623830663133653938396164303634643833383936373634 Jan 13 23:43:50.351000 audit: BPF prog-id=249 op=UNLOAD Jan 13 23:43:50.351000 audit[5409]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5335 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:50.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466336664623830663133653938396164303634643833383936373634 Jan 13 23:43:50.352000 audit: BPF prog-id=250 op=LOAD Jan 13 23:43:50.352000 audit[5409]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=5335 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:50.352000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466336664623830663133653938396164303634643833383936373634 Jan 13 23:43:50.352000 audit: BPF prog-id=251 op=LOAD Jan 13 23:43:50.352000 audit[5409]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=5335 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:50.352000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466336664623830663133653938396164303634643833383936373634 Jan 13 23:43:50.352000 audit: BPF prog-id=251 op=UNLOAD Jan 13 23:43:50.352000 audit[5409]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5335 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:50.352000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466336664623830663133653938396164303634643833383936373634 Jan 13 23:43:50.353000 audit: BPF prog-id=250 op=UNLOAD Jan 13 23:43:50.353000 audit[5409]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5335 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:50.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466336664623830663133653938396164303634643833383936373634 Jan 13 23:43:50.353000 audit: BPF prog-id=252 op=LOAD Jan 13 23:43:50.353000 audit[5409]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=5335 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:50.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466336664623830663133653938396164303634643833383936373634 Jan 13 23:43:50.369952 systemd-networkd[1570]: cali468302edf06: Link UP Jan 13 23:43:50.375291 systemd-networkd[1570]: cali468302edf06: Gained carrier Jan 13 23:43:50.425959 containerd[1990]: time="2026-01-13T23:43:50.425650672Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:43:50.428302 containerd[1990]: time="2026-01-13T23:43:50.428212780Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 13 23:43:50.428486 containerd[1990]: time="2026-01-13T23:43:50.428359180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 13 23:43:50.431852 kubelet[3562]: E0113 23:43:50.431383 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:43:50.431852 kubelet[3562]: E0113 23:43:50.431449 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:43:50.431852 kubelet[3562]: E0113 23:43:50.431651 3562 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ksj5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lblk8_calico-system(884e12a9-b4d3-4695-bc91-5cdf1a464d0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 13 23:43:50.434623 containerd[1990]: time="2026-01-13T23:43:50.434518012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 13 23:43:50.438648 containerd[1990]: 2026-01-13 23:43:49.932 [INFO][5320] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--dvsmb-eth0 calico-apiserver-6bc5d5895- calico-apiserver 7796067b-5cab-42e9-af9d-320bb4208060 845 0 2026-01-13 23:43:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bc5d5895 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-22-81 calico-apiserver-6bc5d5895-dvsmb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali468302edf06 [] [] }} ContainerID="70676f50475bbda64d76858810525beeb8cd9108212aa8847ea91b12c04d1c11" Namespace="calico-apiserver" Pod="calico-apiserver-6bc5d5895-dvsmb" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--dvsmb-" Jan 13 23:43:50.438648 containerd[1990]: 2026-01-13 23:43:49.933 [INFO][5320] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70676f50475bbda64d76858810525beeb8cd9108212aa8847ea91b12c04d1c11" 
Namespace="calico-apiserver" Pod="calico-apiserver-6bc5d5895-dvsmb" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--dvsmb-eth0" Jan 13 23:43:50.438648 containerd[1990]: 2026-01-13 23:43:50.097 [INFO][5384] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70676f50475bbda64d76858810525beeb8cd9108212aa8847ea91b12c04d1c11" HandleID="k8s-pod-network.70676f50475bbda64d76858810525beeb8cd9108212aa8847ea91b12c04d1c11" Workload="ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--dvsmb-eth0" Jan 13 23:43:50.438648 containerd[1990]: 2026-01-13 23:43:50.101 [INFO][5384] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="70676f50475bbda64d76858810525beeb8cd9108212aa8847ea91b12c04d1c11" HandleID="k8s-pod-network.70676f50475bbda64d76858810525beeb8cd9108212aa8847ea91b12c04d1c11" Workload="ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--dvsmb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000324810), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-22-81", "pod":"calico-apiserver-6bc5d5895-dvsmb", "timestamp":"2026-01-13 23:43:50.097888946 +0000 UTC"}, Hostname:"ip-172-31-22-81", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:43:50.438648 containerd[1990]: 2026-01-13 23:43:50.101 [INFO][5384] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:43:50.438648 containerd[1990]: 2026-01-13 23:43:50.102 [INFO][5384] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 13 23:43:50.438648 containerd[1990]: 2026-01-13 23:43:50.103 [INFO][5384] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-81' Jan 13 23:43:50.438648 containerd[1990]: 2026-01-13 23:43:50.153 [INFO][5384] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70676f50475bbda64d76858810525beeb8cd9108212aa8847ea91b12c04d1c11" host="ip-172-31-22-81" Jan 13 23:43:50.438648 containerd[1990]: 2026-01-13 23:43:50.213 [INFO][5384] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-81" Jan 13 23:43:50.438648 containerd[1990]: 2026-01-13 23:43:50.253 [INFO][5384] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:50.438648 containerd[1990]: 2026-01-13 23:43:50.287 [INFO][5384] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:50.438648 containerd[1990]: 2026-01-13 23:43:50.296 [INFO][5384] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:50.438648 containerd[1990]: 2026-01-13 23:43:50.296 [INFO][5384] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.70676f50475bbda64d76858810525beeb8cd9108212aa8847ea91b12c04d1c11" host="ip-172-31-22-81" Jan 13 23:43:50.438648 containerd[1990]: 2026-01-13 23:43:50.300 [INFO][5384] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.70676f50475bbda64d76858810525beeb8cd9108212aa8847ea91b12c04d1c11 Jan 13 23:43:50.438648 containerd[1990]: 2026-01-13 23:43:50.316 [INFO][5384] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.70676f50475bbda64d76858810525beeb8cd9108212aa8847ea91b12c04d1c11" host="ip-172-31-22-81" Jan 13 23:43:50.438648 containerd[1990]: 
2026-01-13 23:43:50.335 [INFO][5384] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.136/26] block=192.168.127.128/26 handle="k8s-pod-network.70676f50475bbda64d76858810525beeb8cd9108212aa8847ea91b12c04d1c11" host="ip-172-31-22-81" Jan 13 23:43:50.438648 containerd[1990]: 2026-01-13 23:43:50.335 [INFO][5384] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.136/26] handle="k8s-pod-network.70676f50475bbda64d76858810525beeb8cd9108212aa8847ea91b12c04d1c11" host="ip-172-31-22-81" Jan 13 23:43:50.438648 containerd[1990]: 2026-01-13 23:43:50.336 [INFO][5384] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 13 23:43:50.438648 containerd[1990]: 2026-01-13 23:43:50.336 [INFO][5384] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.136/26] IPv6=[] ContainerID="70676f50475bbda64d76858810525beeb8cd9108212aa8847ea91b12c04d1c11" HandleID="k8s-pod-network.70676f50475bbda64d76858810525beeb8cd9108212aa8847ea91b12c04d1c11" Workload="ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--dvsmb-eth0" Jan 13 23:43:50.439706 containerd[1990]: 2026-01-13 23:43:50.345 [INFO][5320] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70676f50475bbda64d76858810525beeb8cd9108212aa8847ea91b12c04d1c11" Namespace="calico-apiserver" Pod="calico-apiserver-6bc5d5895-dvsmb" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--dvsmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--dvsmb-eth0", GenerateName:"calico-apiserver-6bc5d5895-", Namespace:"calico-apiserver", SelfLink:"", UID:"7796067b-5cab-42e9-af9d-320bb4208060", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bc5d5895", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-81", ContainerID:"", Pod:"calico-apiserver-6bc5d5895-dvsmb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali468302edf06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:43:50.439706 containerd[1990]: 2026-01-13 23:43:50.348 [INFO][5320] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.136/32] ContainerID="70676f50475bbda64d76858810525beeb8cd9108212aa8847ea91b12c04d1c11" Namespace="calico-apiserver" Pod="calico-apiserver-6bc5d5895-dvsmb" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--dvsmb-eth0" Jan 13 23:43:50.439706 containerd[1990]: 2026-01-13 23:43:50.348 [INFO][5320] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali468302edf06 ContainerID="70676f50475bbda64d76858810525beeb8cd9108212aa8847ea91b12c04d1c11" 
Namespace="calico-apiserver" Pod="calico-apiserver-6bc5d5895-dvsmb" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--dvsmb-eth0" Jan 13 23:43:50.439706 containerd[1990]: 2026-01-13 23:43:50.376 [INFO][5320] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70676f50475bbda64d76858810525beeb8cd9108212aa8847ea91b12c04d1c11" Namespace="calico-apiserver" Pod="calico-apiserver-6bc5d5895-dvsmb" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--dvsmb-eth0" Jan 13 23:43:50.439706 containerd[1990]: 2026-01-13 23:43:50.376 [INFO][5320] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70676f50475bbda64d76858810525beeb8cd9108212aa8847ea91b12c04d1c11" Namespace="calico-apiserver" Pod="calico-apiserver-6bc5d5895-dvsmb" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--dvsmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--dvsmb-eth0", GenerateName:"calico-apiserver-6bc5d5895-", Namespace:"calico-apiserver", SelfLink:"", UID:"7796067b-5cab-42e9-af9d-320bb4208060", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bc5d5895", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-81", ContainerID:"70676f50475bbda64d76858810525beeb8cd9108212aa8847ea91b12c04d1c11", Pod:"calico-apiserver-6bc5d5895-dvsmb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali468302edf06", MAC:"9a:17:74:37:a8:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:43:50.439706 containerd[1990]: 2026-01-13 23:43:50.419 [INFO][5320] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70676f50475bbda64d76858810525beeb8cd9108212aa8847ea91b12c04d1c11" Namespace="calico-apiserver" Pod="calico-apiserver-6bc5d5895-dvsmb" WorkloadEndpoint="ip--172--31--22--81-k8s-calico--apiserver--6bc5d5895--dvsmb-eth0" Jan 13 23:43:50.459052 containerd[1990]: time="2026-01-13T23:43:50.457429084Z" level=info msg="StartContainer for \"4f3fdb80f13e989ad064d83896764a53ad4f78ded00019f6c22a22176e6a00d2\" returns successfully" Jan 13 23:43:50.511623 containerd[1990]: time="2026-01-13T23:43:50.511545857Z" level=info msg="connecting to shim 70676f50475bbda64d76858810525beeb8cd9108212aa8847ea91b12c04d1c11" address="unix:///run/containerd/s/127dccf18f436c5c9ff62f215b41a356a681076722ec973386156891202a772c" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:43:50.610099 systemd[1]: Started cri-containerd-70676f50475bbda64d76858810525beeb8cd9108212aa8847ea91b12c04d1c11.scope - libcontainer container 
70676f50475bbda64d76858810525beeb8cd9108212aa8847ea91b12c04d1c11. Jan 13 23:43:50.665000 audit[5480]: NETFILTER_CFG table=filter:139 family=2 entries=57 op=nft_register_chain pid=5480 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:43:50.665000 audit[5480]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27812 a0=3 a1=fffff79738a0 a2=0 a3=ffff916defa8 items=0 ppid=4696 pid=5480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:50.665000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:43:50.746056 containerd[1990]: time="2026-01-13T23:43:50.738805518Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:43:50.746056 containerd[1990]: time="2026-01-13T23:43:50.743712390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 13 23:43:50.746056 containerd[1990]: time="2026-01-13T23:43:50.743843562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 13 23:43:50.748043 kubelet[3562]: E0113 23:43:50.747811 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:43:50.748043 kubelet[3562]: E0113 23:43:50.747879 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:43:50.748205 kubelet[3562]: E0113 23:43:50.748038 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ksj5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lblk8_calico-system(884e12a9-b4d3-4695-bc91-5cdf1a464d0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 13 23:43:50.751023 kubelet[3562]: E0113 23:43:50.749713 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lblk8" podUID="884e12a9-b4d3-4695-bc91-5cdf1a464d0b" Jan 13 23:43:50.787000 audit: BPF prog-id=253 op=LOAD Jan 13 23:43:50.788000 audit: BPF prog-id=254 op=LOAD Jan 13 23:43:50.788000 audit[5467]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=5455 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:50.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730363736663530343735626264613634643736383538383130353235 
Jan 13 23:43:50.789000 audit: BPF prog-id=254 op=UNLOAD Jan 13 23:43:50.789000 audit[5467]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5455 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:50.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730363736663530343735626264613634643736383538383130353235 Jan 13 23:43:50.789000 audit: BPF prog-id=255 op=LOAD Jan 13 23:43:50.789000 audit[5467]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=5455 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:50.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730363736663530343735626264613634643736383538383130353235 Jan 13 23:43:50.789000 audit: BPF prog-id=256 op=LOAD Jan 13 23:43:50.789000 audit[5467]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=5455 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:50.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730363736663530343735626264613634643736383538383130353235 Jan 13 23:43:50.789000 audit: BPF prog-id=256 op=UNLOAD Jan 13 23:43:50.789000 audit[5467]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5455 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:50.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730363736663530343735626264613634643736383538383130353235 Jan 13 23:43:50.789000 audit: BPF prog-id=255 op=UNLOAD Jan 13 23:43:50.789000 audit[5467]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5455 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:50.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730363736663530343735626264613634643736383538383130353235 Jan 13 23:43:50.789000 audit: BPF prog-id=257 op=LOAD Jan 13 23:43:50.789000 audit[5467]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 
a1=40001a8648 a2=98 a3=0 items=0 ppid=5455 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:50.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730363736663530343735626264613634643736383538383130353235 Jan 13 23:43:50.881718 containerd[1990]: time="2026-01-13T23:43:50.881589846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bc5d5895-dvsmb,Uid:7796067b-5cab-42e9-af9d-320bb4208060,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"70676f50475bbda64d76858810525beeb8cd9108212aa8847ea91b12c04d1c11\"" Jan 13 23:43:50.886939 containerd[1990]: time="2026-01-13T23:43:50.886866510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:43:50.908477 systemd-networkd[1570]: calid52787ae101: Gained IPv6LL Jan 13 23:43:51.107864 kubelet[3562]: E0113 23:43:51.107707 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c9dddc9f7-sd8kc" podUID="db70aac6-82d4-4ef8-98ae-1ad4091dd76e" Jan 13 23:43:51.114743 kubelet[3562]: E0113 23:43:51.114659 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-g9wvb" podUID="d529d459-4c8c-4f5e-b8a4-f53690574272" Jan 13 23:43:51.116036 kubelet[3562]: E0113 23:43:51.115936 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lblk8" podUID="884e12a9-b4d3-4695-bc91-5cdf1a464d0b" Jan 13 23:43:51.118210 kubelet[3562]: E0113 23:43:51.118009 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zwrvg" podUID="86622233-a85f-41fd-b458-2112644e82b9" Jan 13 23:43:51.185187 kubelet[3562]: I0113 23:43:51.184504 3562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pkh58" podStartSLOduration=56.184480444 podStartE2EDuration="56.184480444s" podCreationTimestamp="2026-01-13 23:42:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:43:51.13891606 +0000 UTC m=+62.831659057" watchObservedRunningTime="2026-01-13 23:43:51.184480444 +0000 UTC m=+62.877223441" Jan 13 23:43:51.200448 containerd[1990]: time="2026-01-13T23:43:51.200368756Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:43:51.202967 containerd[1990]: time="2026-01-13T23:43:51.202885204Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:43:51.203283 containerd[1990]: time="2026-01-13T23:43:51.203015536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:43:51.203426 kubelet[3562]: E0113 23:43:51.203314 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:43:51.203505 kubelet[3562]: E0113 23:43:51.203469 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:43:51.203885 kubelet[3562]: E0113 23:43:51.203762 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-77d2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bc5d5895-dvsmb_calico-apiserver(7796067b-5cab-42e9-af9d-320bb4208060): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:43:51.205497 kubelet[3562]: E0113 23:43:51.205173 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-dvsmb" podUID="7796067b-5cab-42e9-af9d-320bb4208060" Jan 13 23:43:51.281000 audit[5499]: NETFILTER_CFG table=filter:140 family=2 entries=17 op=nft_register_rule pid=5499 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:51.281000 audit[5499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc6a342d0 a2=0 a3=1 items=0 ppid=3666 pid=5499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:51.281000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:51.308000 audit[5499]: NETFILTER_CFG table=nat:141 family=2 entries=35 op=nft_register_chain pid=5499 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:51.308000 audit[5499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffc6a342d0 a2=0 a3=1 items=0 ppid=3666 pid=5499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:51.308000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:51.548983 systemd-networkd[1570]: cali32b04d5117d: Gained IPv6LL Jan 13 23:43:51.597690 containerd[1990]: time="2026-01-13T23:43:51.597402762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rggqk,Uid:866a6379-067c-4a18-817c-a6d7c19adba8,Namespace:kube-system,Attempt:0,}" Jan 13 23:43:51.875736 systemd-networkd[1570]: cali288392b70b3: Link UP Jan 13 23:43:51.879278 systemd-networkd[1570]: cali288392b70b3: Gained carrier Jan 13 23:43:51.904785 containerd[1990]: 
2026-01-13 23:43:51.717 [INFO][5506] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--81-k8s-coredns--668d6bf9bc--rggqk-eth0 coredns-668d6bf9bc- kube-system 866a6379-067c-4a18-817c-a6d7c19adba8 847 0 2026-01-13 23:42:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-22-81 coredns-668d6bf9bc-rggqk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali288392b70b3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61" Namespace="kube-system" Pod="coredns-668d6bf9bc-rggqk" WorkloadEndpoint="ip--172--31--22--81-k8s-coredns--668d6bf9bc--rggqk-" Jan 13 23:43:51.904785 containerd[1990]: 2026-01-13 23:43:51.717 [INFO][5506] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61" Namespace="kube-system" Pod="coredns-668d6bf9bc-rggqk" WorkloadEndpoint="ip--172--31--22--81-k8s-coredns--668d6bf9bc--rggqk-eth0" Jan 13 23:43:51.904785 containerd[1990]: 2026-01-13 23:43:51.792 [INFO][5518] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61" HandleID="k8s-pod-network.a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61" Workload="ip--172--31--22--81-k8s-coredns--668d6bf9bc--rggqk-eth0" Jan 13 23:43:51.904785 containerd[1990]: 2026-01-13 23:43:51.793 [INFO][5518] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61" HandleID="k8s-pod-network.a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61" Workload="ip--172--31--22--81-k8s-coredns--668d6bf9bc--rggqk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330520), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-81", "pod":"coredns-668d6bf9bc-rggqk", "timestamp":"2026-01-13 23:43:51.792647515 +0000 UTC"}, Hostname:"ip-172-31-22-81", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:43:51.904785 containerd[1990]: 2026-01-13 23:43:51.793 [INFO][5518] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:43:51.904785 containerd[1990]: 2026-01-13 23:43:51.794 [INFO][5518] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 13 23:43:51.904785 containerd[1990]: 2026-01-13 23:43:51.794 [INFO][5518] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-81' Jan 13 23:43:51.904785 containerd[1990]: 2026-01-13 23:43:51.812 [INFO][5518] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61" host="ip-172-31-22-81" Jan 13 23:43:51.904785 containerd[1990]: 2026-01-13 23:43:51.820 [INFO][5518] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-81" Jan 13 23:43:51.904785 containerd[1990]: 2026-01-13 23:43:51.828 [INFO][5518] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:51.904785 containerd[1990]: 2026-01-13 23:43:51.831 [INFO][5518] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:51.904785 containerd[1990]: 2026-01-13 23:43:51.835 [INFO][5518] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="ip-172-31-22-81" Jan 13 23:43:51.904785 containerd[1990]: 2026-01-13 23:43:51.836 [INFO][5518] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61" host="ip-172-31-22-81" Jan 13 23:43:51.904785 containerd[1990]: 2026-01-13 23:43:51.838 [INFO][5518] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61 Jan 13 23:43:51.904785 containerd[1990]: 2026-01-13 23:43:51.845 [INFO][5518] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61" host="ip-172-31-22-81" Jan 13 23:43:51.904785 containerd[1990]: 2026-01-13 23:43:51.860 [INFO][5518] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.137/26] block=192.168.127.128/26 handle="k8s-pod-network.a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61" host="ip-172-31-22-81" Jan 13 23:43:51.904785 containerd[1990]: 2026-01-13 23:43:51.860 [INFO][5518] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.137/26] handle="k8s-pod-network.a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61" host="ip-172-31-22-81" Jan 13 23:43:51.904785 containerd[1990]: 2026-01-13 23:43:51.860 [INFO][5518] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 13 23:43:51.904785 containerd[1990]: 2026-01-13 23:43:51.860 [INFO][5518] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.137/26] IPv6=[] ContainerID="a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61" HandleID="k8s-pod-network.a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61" Workload="ip--172--31--22--81-k8s-coredns--668d6bf9bc--rggqk-eth0" Jan 13 23:43:51.911560 containerd[1990]: 2026-01-13 23:43:51.866 [INFO][5506] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61" Namespace="kube-system" Pod="coredns-668d6bf9bc-rggqk" WorkloadEndpoint="ip--172--31--22--81-k8s-coredns--668d6bf9bc--rggqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--81-k8s-coredns--668d6bf9bc--rggqk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"866a6379-067c-4a18-817c-a6d7c19adba8", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 42, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-81", ContainerID:"", Pod:"coredns-668d6bf9bc-rggqk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali288392b70b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:43:51.911560 containerd[1990]: 2026-01-13 23:43:51.866 [INFO][5506] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.137/32] ContainerID="a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61" Namespace="kube-system" Pod="coredns-668d6bf9bc-rggqk" WorkloadEndpoint="ip--172--31--22--81-k8s-coredns--668d6bf9bc--rggqk-eth0" Jan 13 23:43:51.911560 containerd[1990]: 2026-01-13 23:43:51.866 [INFO][5506] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali288392b70b3 ContainerID="a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61" Namespace="kube-system" Pod="coredns-668d6bf9bc-rggqk" WorkloadEndpoint="ip--172--31--22--81-k8s-coredns--668d6bf9bc--rggqk-eth0" Jan 13 23:43:51.911560 containerd[1990]: 2026-01-13 23:43:51.877 [INFO][5506] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61" Namespace="kube-system" Pod="coredns-668d6bf9bc-rggqk" 
WorkloadEndpoint="ip--172--31--22--81-k8s-coredns--668d6bf9bc--rggqk-eth0" Jan 13 23:43:51.911560 containerd[1990]: 2026-01-13 23:43:51.881 [INFO][5506] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61" Namespace="kube-system" Pod="coredns-668d6bf9bc-rggqk" WorkloadEndpoint="ip--172--31--22--81-k8s-coredns--668d6bf9bc--rggqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--81-k8s-coredns--668d6bf9bc--rggqk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"866a6379-067c-4a18-817c-a6d7c19adba8", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 42, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-81", ContainerID:"a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61", Pod:"coredns-668d6bf9bc-rggqk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali288392b70b3", MAC:"7e:ec:58:e9:e1:e3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:43:51.911560 containerd[1990]: 2026-01-13 23:43:51.897 [INFO][5506] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61" Namespace="kube-system" Pod="coredns-668d6bf9bc-rggqk" WorkloadEndpoint="ip--172--31--22--81-k8s-coredns--668d6bf9bc--rggqk-eth0" Jan 13 23:43:51.976161 containerd[1990]: time="2026-01-13T23:43:51.975656792Z" level=info msg="connecting to shim a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61" address="unix:///run/containerd/s/54fd907ef46bf8f9370925fe7d375dd198a4fb96422855012dafa31ebf657b71" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:43:52.002000 audit[5545]: NETFILTER_CFG table=filter:142 family=2 entries=56 op=nft_register_chain pid=5545 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:43:52.002000 audit[5545]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25080 a0=3 a1=ffffc9fc0db0 a2=0 a3=ffff81198fa8 items=0 ppid=4696 pid=5545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:52.002000 audit: 
PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:43:52.054532 systemd[1]: Started cri-containerd-a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61.scope - libcontainer container a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61. Jan 13 23:43:52.060570 systemd-networkd[1570]: cali468302edf06: Gained IPv6LL Jan 13 23:43:52.083000 audit: BPF prog-id=258 op=LOAD Jan 13 23:43:52.084000 audit: BPF prog-id=259 op=LOAD Jan 13 23:43:52.084000 audit[5556]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5542 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:52.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131353262373131373436393730653138613235306166626532633339 Jan 13 23:43:52.084000 audit: BPF prog-id=259 op=UNLOAD Jan 13 23:43:52.084000 audit[5556]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5542 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:52.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131353262373131373436393730653138613235306166626532633339 Jan 13 23:43:52.084000 audit: BPF prog-id=260 op=LOAD Jan 13 23:43:52.084000 audit[5556]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5542 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:52.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131353262373131373436393730653138613235306166626532633339 Jan 13 23:43:52.085000 audit: BPF prog-id=261 op=LOAD Jan 13 23:43:52.085000 audit[5556]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5542 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:52.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131353262373131373436393730653138613235306166626532633339 Jan 13 23:43:52.085000 audit: BPF prog-id=261 op=UNLOAD Jan 13 23:43:52.085000 audit[5556]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5542 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:52.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131353262373131373436393730653138613235306166626532633339 Jan 13 23:43:52.085000 audit: BPF prog-id=260 op=UNLOAD Jan 13 23:43:52.085000 audit[5556]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5542 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:52.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131353262373131373436393730653138613235306166626532633339 Jan 13 23:43:52.085000 audit: BPF prog-id=262 op=LOAD Jan 13 23:43:52.085000 audit[5556]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5542 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:52.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131353262373131373436393730653138613235306166626532633339 Jan 13 23:43:52.110092 kubelet[3562]: E0113 23:43:52.109944 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lblk8" podUID="884e12a9-b4d3-4695-bc91-5cdf1a464d0b" Jan 13 23:43:52.117386 kubelet[3562]: E0113 23:43:52.115875 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-dvsmb" podUID="7796067b-5cab-42e9-af9d-320bb4208060" Jan 13 23:43:52.187337 containerd[1990]: time="2026-01-13T23:43:52.187279817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rggqk,Uid:866a6379-067c-4a18-817c-a6d7c19adba8,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61\"" Jan 13 23:43:52.201536 containerd[1990]: time="2026-01-13T23:43:52.201444521Z" level=info msg="CreateContainer within sandbox \"a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 23:43:52.236186 containerd[1990]: time="2026-01-13T23:43:52.233606753Z" level=info msg="Container 8c7392a02194877b24d91405a8d8820934dc187f40004d2cb2b7109e3ff6820d: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:43:52.258323 containerd[1990]: time="2026-01-13T23:43:52.258239729Z" level=info msg="CreateContainer within sandbox \"a152b711746970e18a250afbe2c39e7e2349eface7c0a21c12bedef99e88ad61\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8c7392a02194877b24d91405a8d8820934dc187f40004d2cb2b7109e3ff6820d\"" Jan 13 23:43:52.259000 audit[5582]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=5582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:52.259000 audit[5582]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff916fe90 a2=0 a3=1 items=0 ppid=3666 pid=5582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:52.259000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:52.262845 containerd[1990]: time="2026-01-13T23:43:52.262712537Z" level=info msg="StartContainer for \"8c7392a02194877b24d91405a8d8820934dc187f40004d2cb2b7109e3ff6820d\"" Jan 13 23:43:52.272223 containerd[1990]: time="2026-01-13T23:43:52.272112617Z" level=info msg="connecting to shim 8c7392a02194877b24d91405a8d8820934dc187f40004d2cb2b7109e3ff6820d" address="unix:///run/containerd/s/54fd907ef46bf8f9370925fe7d375dd198a4fb96422855012dafa31ebf657b71" protocol=ttrpc version=3 Jan 13 23:43:52.286000 audit[5582]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=5582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:52.286000 audit[5582]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff916fe90 a2=0 a3=1 items=0 ppid=3666 pid=5582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:52.286000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:52.337526 systemd[1]: Started cri-containerd-8c7392a02194877b24d91405a8d8820934dc187f40004d2cb2b7109e3ff6820d.scope - libcontainer container 8c7392a02194877b24d91405a8d8820934dc187f40004d2cb2b7109e3ff6820d. 
Jan 13 23:43:52.379000 audit: BPF prog-id=263 op=LOAD Jan 13 23:43:52.381000 audit: BPF prog-id=264 op=LOAD Jan 13 23:43:52.381000 audit[5583]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=5542 pid=5583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:52.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863373339326130323139343837376232346439313430356138643838 Jan 13 23:43:52.382000 audit: BPF prog-id=264 op=UNLOAD Jan 13 23:43:52.382000 audit[5583]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5542 pid=5583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:52.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863373339326130323139343837376232346439313430356138643838 Jan 13 23:43:52.382000 audit: BPF prog-id=265 op=LOAD Jan 13 23:43:52.382000 audit[5583]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=5542 pid=5583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:52.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863373339326130323139343837376232346439313430356138643838 Jan 13 23:43:52.382000 audit: BPF prog-id=266 op=LOAD Jan 13 23:43:52.382000 audit[5583]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=5542 pid=5583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:52.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863373339326130323139343837376232346439313430356138643838 Jan 13 23:43:52.382000 audit: BPF prog-id=266 op=UNLOAD Jan 13 23:43:52.382000 audit[5583]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5542 pid=5583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:52.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863373339326130323139343837376232346439313430356138643838 Jan 13 23:43:52.382000 audit: BPF prog-id=265 op=UNLOAD Jan 13 23:43:52.382000 audit[5583]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5542 pid=5583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:52.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863373339326130323139343837376232346439313430356138643838 Jan 13 23:43:52.382000 audit: BPF prog-id=267 op=LOAD Jan 13 23:43:52.382000 audit[5583]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=5542 pid=5583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:52.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863373339326130323139343837376232346439313430356138643838 Jan 13 23:43:52.437023 containerd[1990]: time="2026-01-13T23:43:52.436965486Z" level=info msg="StartContainer for \"8c7392a02194877b24d91405a8d8820934dc187f40004d2cb2b7109e3ff6820d\" returns successfully" Jan 13 23:43:53.164491 kubelet[3562]: I0113 23:43:53.164393 3562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rggqk" podStartSLOduration=58.164368674 podStartE2EDuration="58.164368674s" podCreationTimestamp="2026-01-13 23:42:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:43:53.13752921 +0000 UTC m=+64.830272219" watchObservedRunningTime="2026-01-13 23:43:53.164368674 +0000 UTC m=+64.857111671" Jan 13 23:43:53.210000 audit[5616]: NETFILTER_CFG table=filter:145 family=2 entries=14 op=nft_register_rule pid=5616 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:53.212987 kernel: kauditd_printk_skb: 236 callbacks suppressed Jan 13 23:43:53.213123 kernel: audit: type=1325 audit(1768347833.210:764): table=filter:145 family=2 entries=14 op=nft_register_rule pid=5616 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:53.210000 audit[5616]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe4cd8a00 a2=0 a3=1 items=0 ppid=3666 pid=5616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:53.224924 kernel: audit: type=1300 audit(1768347833.210:764): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe4cd8a00 a2=0 a3=1 items=0 ppid=3666 pid=5616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:53.229442 kernel: audit: type=1327 audit(1768347833.210:764): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:53.210000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:53.237000 audit[5616]: NETFILTER_CFG table=nat:146 family=2 entries=56 op=nft_register_chain pid=5616 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:53.237000 audit[5616]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffe4cd8a00 a2=0 a3=1 items=0 ppid=3666 pid=5616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:53.249312 kernel: audit: type=1325 audit(1768347833.237:765): table=nat:146 family=2 entries=56 op=nft_register_chain pid=5616 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:43:53.249454 kernel: audit: type=1300 audit(1768347833.237:765): arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffe4cd8a00 a2=0 a3=1 items=0 ppid=3666 pid=5616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:53.237000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:53.252784 kernel: audit: type=1327 audit(1768347833.237:765): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:43:53.596453 systemd-networkd[1570]: cali288392b70b3: Gained IPv6LL Jan 13 23:43:55.812705 ntpd[1935]: Listen normally on 6 vxlan.calico 192.168.127.128:123 Jan 13 23:43:55.812806 ntpd[1935]: Listen normally on 7 vxlan.calico [fe80::64bb:90ff:fe09:96f1%4]:123 Jan 13 23:43:55.813351 ntpd[1935]: 13 Jan 23:43:55 ntpd[1935]: Listen normally on 6 vxlan.calico 192.168.127.128:123 Jan 13 23:43:55.813351 ntpd[1935]: 13 Jan 23:43:55 ntpd[1935]: Listen normally on 7 vxlan.calico [fe80::64bb:90ff:fe09:96f1%4]:123 Jan 13 23:43:55.813351 ntpd[1935]: 13 Jan 23:43:55 ntpd[1935]: Listen normally on 8 cali03e1dff9f78 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 13 23:43:55.813351 ntpd[1935]: 13 Jan 23:43:55 ntpd[1935]: Listen normally on 9 cali56b756c4269 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 13 23:43:55.813351 ntpd[1935]: 13 Jan 23:43:55 ntpd[1935]: Listen normally on 10 calidc7f545cbc7 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 13 23:43:55.813351 ntpd[1935]: 13 Jan 23:43:55 ntpd[1935]: Listen normally on 11 cali703880400e3 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 13 23:43:55.813351 ntpd[1935]: 13 Jan 23:43:55 ntpd[1935]: Listen normally on 12 cali11c39e212e5 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 13 23:43:55.813351 ntpd[1935]: 13 Jan 23:43:55 ntpd[1935]: Listen normally on 13 calid52787ae101 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 13 23:43:55.813351 ntpd[1935]: 13 Jan 23:43:55 ntpd[1935]: Listen normally on 14 cali32b04d5117d [fe80::ecee:eeff:feee:eeee%13]:123 Jan 13 23:43:55.813351 ntpd[1935]: 13 Jan 23:43:55 ntpd[1935]: Listen normally on 15 cali468302edf06 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 13 23:43:55.813351 ntpd[1935]: 13 Jan 23:43:55 ntpd[1935]: Listen normally on 16 cali288392b70b3 [fe80::ecee:eeff:feee:eeee%15]:123 Jan 13 23:43:55.812857 ntpd[1935]: Listen normally on 8 cali03e1dff9f78 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 13 23:43:55.812930 ntpd[1935]: Listen normally on 9 cali56b756c4269 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 13 23:43:55.812997 ntpd[1935]: 
Listen normally on 10 calidc7f545cbc7 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 13 23:43:55.813045 ntpd[1935]: Listen normally on 11 cali703880400e3 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 13 23:43:55.813090 ntpd[1935]: Listen normally on 12 cali11c39e212e5 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 13 23:43:55.813183 ntpd[1935]: Listen normally on 13 calid52787ae101 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 13 23:43:55.813243 ntpd[1935]: Listen normally on 14 cali32b04d5117d [fe80::ecee:eeff:feee:eeee%13]:123 Jan 13 23:43:55.813293 ntpd[1935]: Listen normally on 15 cali468302edf06 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 13 23:43:55.813340 ntpd[1935]: Listen normally on 16 cali288392b70b3 [fe80::ecee:eeff:feee:eeee%15]:123 Jan 13 23:43:59.600072 containerd[1990]: time="2026-01-13T23:43:59.598337750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 13 23:43:59.875883 containerd[1990]: time="2026-01-13T23:43:59.875705343Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:43:59.878620 containerd[1990]: time="2026-01-13T23:43:59.878535195Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 13 23:43:59.878763 containerd[1990]: time="2026-01-13T23:43:59.878667723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 13 23:43:59.879060 kubelet[3562]: E0113 23:43:59.878953 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:43:59.879060 kubelet[3562]: E0113 23:43:59.879024 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:43:59.879700 kubelet[3562]: E0113 23:43:59.879391 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d4af49205dc04bf3a26511b093d7fa31,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f4qrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78b56f8d66-w4wmx_calico-system(d5bf06bf-e923-4a15-849e-51c08230b88e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 13 23:43:59.882672 containerd[1990]: time="2026-01-13T23:43:59.882607071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 13 23:44:00.132950 containerd[1990]: time="2026-01-13T23:44:00.132775368Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:44:00.135681 containerd[1990]: time="2026-01-13T23:44:00.135591624Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 13 23:44:00.136243 containerd[1990]: time="2026-01-13T23:44:00.135730908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 13 23:44:00.136474 kubelet[3562]: E0113 23:44:00.135941 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:44:00.136474 kubelet[3562]: E0113 23:44:00.136009 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:44:00.137351 kubelet[3562]: E0113 23:44:00.137206 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f4qrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78b56f8d66-w4wmx_calico-system(d5bf06bf-e923-4a15-849e-51c08230b88e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 13 23:44:00.138711 kubelet[3562]: E0113 23:44:00.138624 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78b56f8d66-w4wmx" podUID="d5bf06bf-e923-4a15-849e-51c08230b88e" Jan 13 23:44:02.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.22.81:22-20.161.92.111:36748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:02.507624 systemd[1]: Started sshd@7-172.31.22.81:22-20.161.92.111:36748.service - OpenSSH per-connection server daemon (20.161.92.111:36748). 
Jan 13 23:44:02.516203 kernel: audit: type=1130 audit(1768347842.506:766): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.22.81:22-20.161.92.111:36748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:02.602631 containerd[1990]: time="2026-01-13T23:44:02.602288789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:44:02.861252 containerd[1990]: time="2026-01-13T23:44:02.859455006Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:44:02.862440 containerd[1990]: time="2026-01-13T23:44:02.862376274Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:44:02.862749 containerd[1990]: time="2026-01-13T23:44:02.862630014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:44:02.863313 kubelet[3562]: E0113 23:44:02.863264 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:44:02.863936 kubelet[3562]: E0113 23:44:02.863896 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:44:02.864473 kubelet[3562]: E0113 23:44:02.864385 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2dgwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bc5d5895-g9wvb_calico-apiserver(d529d459-4c8c-4f5e-b8a4-f53690574272): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:44:02.865394 containerd[1990]: time="2026-01-13T23:44:02.865330938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 13 23:44:02.867169 kubelet[3562]: E0113 23:44:02.866527 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-g9wvb" podUID="d529d459-4c8c-4f5e-b8a4-f53690574272" Jan 13 23:44:03.059000 audit[5638]: USER_ACCT pid=5638 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:03.067007 sshd[5638]: Accepted publickey for core from 20.161.92.111 port 36748 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:44:03.071057 sshd-session[5638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:44:03.066000 audit[5638]: CRED_ACQ pid=5638 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:03.081203 kernel: audit: type=1101 audit(1768347843.059:767): pid=5638 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:03.081329 kernel: audit: type=1103 audit(1768347843.066:768): pid=5638 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:03.081392 kernel: audit: type=1006 audit(1768347843.066:769): pid=5638 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=9 res=1 Jan 13 23:44:03.066000 audit[5638]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff977fd50 a2=3 a3=0 items=0 ppid=1 pid=5638 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:03.090042 kernel: audit: type=1300 audit(1768347843.066:769): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff977fd50 a2=3 a3=0 items=0 ppid=1 pid=5638 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:03.066000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:44:03.092870 kernel: audit: type=1327 audit(1768347843.066:769): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:44:03.100898 systemd-logind[1947]: New session 9 of user core. Jan 13 23:44:03.105492 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 23:44:03.110000 audit[5638]: USER_START pid=5638 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:03.114000 audit[5642]: CRED_ACQ pid=5642 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:03.126179 kernel: audit: type=1105 audit(1768347843.110:770): pid=5638 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:03.126273 kernel: audit: type=1103 audit(1768347843.114:771): pid=5642 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:03.138282 containerd[1990]: time="2026-01-13T23:44:03.138205503Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:44:03.140638 containerd[1990]: time="2026-01-13T23:44:03.140465451Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 13 23:44:03.140638 containerd[1990]: time="2026-01-13T23:44:03.140548911Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 13 23:44:03.141951 kubelet[3562]: E0113 23:44:03.141010 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:44:03.141951 
kubelet[3562]: E0113 23:44:03.141070 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:44:03.141951 kubelet[3562]: E0113 23:44:03.141311 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hmwts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zwrvg_calico-system(86622233-a85f-41fd-b458-2112644e82b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 13 23:44:03.143379 kubelet[3562]: E0113 23:44:03.143322 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" 
with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zwrvg" podUID="86622233-a85f-41fd-b458-2112644e82b9" Jan 13 23:44:03.532656 sshd[5642]: Connection closed by 20.161.92.111 port 36748 Jan 13 23:44:03.533563 sshd-session[5638]: pam_unix(sshd:session): session closed for user core Jan 13 23:44:03.535000 audit[5638]: USER_END pid=5638 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:03.544281 systemd[1]: sshd@7-172.31.22.81:22-20.161.92.111:36748.service: Deactivated successfully. Jan 13 23:44:03.535000 audit[5638]: CRED_DISP pid=5638 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:03.551175 kernel: audit: type=1106 audit(1768347843.535:772): pid=5638 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:03.551306 kernel: audit: type=1104 audit(1768347843.535:773): pid=5638 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:03.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.22.81:22-20.161.92.111:36748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:03.552830 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 23:44:03.556999 systemd-logind[1947]: Session 9 logged out. Waiting for processes to exit. Jan 13 23:44:03.560952 systemd-logind[1947]: Removed session 9. 
Jan 13 23:44:03.600182 containerd[1990]: time="2026-01-13T23:44:03.598412658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:44:03.855734 containerd[1990]: time="2026-01-13T23:44:03.855552307Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:44:03.860391 containerd[1990]: time="2026-01-13T23:44:03.860191039Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:44:03.860391 containerd[1990]: time="2026-01-13T23:44:03.860324371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:44:03.860896 kubelet[3562]: E0113 23:44:03.860789 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:44:03.860993 kubelet[3562]: E0113 23:44:03.860918 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:44:03.861919 kubelet[3562]: E0113 23:44:03.861799 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxwhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c9dddc9f7-sd8kc_calico-apiserver(db70aac6-82d4-4ef8-98ae-1ad4091dd76e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:44:03.863296 kubelet[3562]: E0113 23:44:03.863170 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c9dddc9f7-sd8kc" podUID="db70aac6-82d4-4ef8-98ae-1ad4091dd76e" Jan 13 23:44:04.604944 containerd[1990]: time="2026-01-13T23:44:04.604181683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 13 23:44:04.869364 containerd[1990]: time="2026-01-13T23:44:04.869103596Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:44:04.871653 containerd[1990]: time="2026-01-13T23:44:04.871548524Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 13 23:44:04.871653 containerd[1990]: time="2026-01-13T23:44:04.871611824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 13 23:44:04.872095 kubelet[3562]: E0113 23:44:04.871890 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:44:04.872095 kubelet[3562]: E0113 23:44:04.871949 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:44:04.872687 kubelet[3562]: E0113 23:44:04.872170 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-csplk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b8d4f74f9-kqpzz_calico-system(3317981a-15b4-41f8-a3cf-26fbd9c6fbf1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 13 23:44:04.874065 kubelet[3562]: E0113 23:44:04.874007 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b8d4f74f9-kqpzz" podUID="3317981a-15b4-41f8-a3cf-26fbd9c6fbf1" Jan 13 23:44:05.598953 containerd[1990]: time="2026-01-13T23:44:05.598216735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 13 23:44:05.855110 containerd[1990]: time="2026-01-13T23:44:05.854748957Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:44:05.857330 containerd[1990]: time="2026-01-13T23:44:05.857115105Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 13 23:44:05.857330 containerd[1990]: time="2026-01-13T23:44:05.857230257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 13 23:44:05.857654 kubelet[3562]: E0113 23:44:05.857580 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:44:05.857654 kubelet[3562]: E0113 23:44:05.857658 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:44:05.858173 kubelet[3562]: E0113 23:44:05.857862 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ksj5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lblk8_calico-system(884e12a9-b4d3-4695-bc91-5cdf1a464d0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 13 23:44:05.861305 
containerd[1990]: time="2026-01-13T23:44:05.861239829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 13 23:44:06.123749 containerd[1990]: time="2026-01-13T23:44:06.123600630Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:44:06.125947 containerd[1990]: time="2026-01-13T23:44:06.125878086Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 13 23:44:06.126094 containerd[1990]: time="2026-01-13T23:44:06.126002478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 13 23:44:06.126382 kubelet[3562]: E0113 23:44:06.126335 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:44:06.127615 kubelet[3562]: E0113 23:44:06.126878 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:44:06.127615 kubelet[3562]: E0113 23:44:06.127039 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ksj5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lblk8_calico-system(884e12a9-b4d3-4695-bc91-5cdf1a464d0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 13 23:44:06.128426 kubelet[3562]: E0113 23:44:06.128268 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lblk8" podUID="884e12a9-b4d3-4695-bc91-5cdf1a464d0b" Jan 13 23:44:06.603036 containerd[1990]: time="2026-01-13T23:44:06.602822936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:44:06.866210 containerd[1990]: time="2026-01-13T23:44:06.865870558Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:44:06.868481 containerd[1990]: time="2026-01-13T23:44:06.868320442Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:44:06.868774 containerd[1990]: time="2026-01-13T23:44:06.868389970Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:44:06.869277 kubelet[3562]: E0113 23:44:06.869118 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:44:06.869639 kubelet[3562]: E0113 23:44:06.869465 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:44:06.870061 kubelet[3562]: E0113 23:44:06.869926 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-77d2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bc5d5895-dvsmb_calico-apiserver(7796067b-5cab-42e9-af9d-320bb4208060): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:44:06.871634 kubelet[3562]: E0113 23:44:06.871559 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-dvsmb" podUID="7796067b-5cab-42e9-af9d-320bb4208060" Jan 13 23:44:08.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.22.81:22-20.161.92.111:36758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:08.625495 systemd[1]: Started sshd@8-172.31.22.81:22-20.161.92.111:36758.service - OpenSSH per-connection server daemon (20.161.92.111:36758). Jan 13 23:44:08.627803 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:44:08.627882 kernel: audit: type=1130 audit(1768347848.624:775): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.22.81:22-20.161.92.111:36758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:09.103000 audit[5661]: USER_ACCT pid=5661 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:09.104784 sshd[5661]: Accepted publickey for core from 20.161.92.111 port 36758 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:44:09.110000 audit[5661]: CRED_ACQ pid=5661 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:09.117320 kernel: audit: type=1101 audit(1768347849.103:776): pid=5661 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:09.117444 kernel: audit: type=1103 audit(1768347849.110:777): pid=5661 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:09.113303 sshd-session[5661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:44:09.121350 kernel: audit: type=1006 audit(1768347849.110:778): pid=5661 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 13 23:44:09.110000 audit[5661]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdb1cede0 a2=3 a3=0 items=0 ppid=1 pid=5661 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:09.127873 kernel: audit: type=1300 audit(1768347849.110:778): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdb1cede0 a2=3 a3=0 items=0 ppid=1 pid=5661 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:09.110000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:44:09.130354 kernel: audit: type=1327 audit(1768347849.110:778): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:44:09.140835 systemd-logind[1947]: New session 10 of user core. Jan 13 23:44:09.145502 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 23:44:09.151000 audit[5661]: USER_START pid=5661 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:09.156000 audit[5665]: CRED_ACQ pid=5665 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:09.165507 kernel: audit: type=1105 audit(1768347849.151:779): pid=5661 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:09.165699 kernel: audit: type=1103 audit(1768347849.156:780): pid=5665 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:09.573820 sshd[5665]: Connection closed by 20.161.92.111 port 36758 Jan 13 23:44:09.575461 sshd-session[5661]: pam_unix(sshd:session): session closed for user core Jan 13 23:44:09.577000 audit[5661]: USER_END pid=5661 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:09.579000 audit[5661]: CRED_DISP pid=5661 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:09.586968 systemd[1]: sshd@8-172.31.22.81:22-20.161.92.111:36758.service: Deactivated successfully. Jan 13 23:44:09.590696 systemd[1]: session-10.scope: Deactivated successfully. 
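The kuberuntime_manager.go "Unhandled Error" entries a few lines above embed the entire spec of the calico-apiserver container that could not be started: the ghcr.io/flatcar/calico/apiserver:v3.30.4 image, the --secure-port/--tls-* arguments, the DATASTORE_TYPE and KUBERNETES_SERVICE_* environment, an HTTPS readiness probe on /readyz:5443, and a locked-down security context running as UID/GID 10001 with all capabilities dropped. Purely as an illustration of what that serialized dump corresponds to (nothing below is taken from sources other than the log text itself, and the official kubernetes Python client is an assumption), a hedged sketch re-expressing the same container with that client's models:

# Hedged sketch: rebuild the container spec dumped in the error above using
# kubernetes client models. Assumes `pip install kubernetes`; field values are
# copied from the log, everything else is illustrative.
from kubernetes import client

calico_apiserver = client.V1Container(
    name="calico-apiserver",
    image="ghcr.io/flatcar/calico/apiserver:v3.30.4",  # the tag containerd cannot resolve
    args=[
        "--secure-port=5443",
        "--tls-private-key-file=/calico-apiserver-certs/tls.key",
        "--tls-cert-file=/calico-apiserver-certs/tls.crt",
    ],
    env=[
        client.V1EnvVar(name="DATASTORE_TYPE", value="kubernetes"),
        client.V1EnvVar(name="KUBERNETES_SERVICE_HOST", value="10.96.0.1"),
        client.V1EnvVar(name="KUBERNETES_SERVICE_PORT", value="443"),
        client.V1EnvVar(name="LOG_LEVEL", value="info"),
        client.V1EnvVar(name="MULTI_INTERFACE_MODE", value="none"),
    ],
    volume_mounts=[
        client.V1VolumeMount(name="calico-apiserver-certs",
                             mount_path="/calico-apiserver-certs", read_only=True),
    ],
    readiness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/readyz", port=5443, scheme="HTTPS"),
        timeout_seconds=5, period_seconds=60, failure_threshold=3,
    ),
    image_pull_policy="IfNotPresent",
    security_context=client.V1SecurityContext(
        run_as_user=10001, run_as_group=10001, run_as_non_root=True,
        privileged=False, allow_privilege_escalation=False,
        capabilities=client.V1Capabilities(drop=["ALL"]),
        seccomp_profile=client.V1SeccompProfile(type="RuntimeDefault"),
    ),
)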
Jan 13 23:44:09.591801 kernel: audit: type=1106 audit(1768347849.577:781): pid=5661 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:09.591875 kernel: audit: type=1104 audit(1768347849.579:782): pid=5661 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:09.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.22.81:22-20.161.92.111:36758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:09.596517 systemd-logind[1947]: Session 10 logged out. Waiting for processes to exit. Jan 13 23:44:09.599343 systemd-logind[1947]: Removed session 10. Jan 13 23:44:13.598890 kubelet[3562]: E0113 23:44:13.598799 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-g9wvb" podUID="d529d459-4c8c-4f5e-b8a4-f53690574272" Jan 13 23:44:14.670379 systemd[1]: Started sshd@9-172.31.22.81:22-20.161.92.111:41774.service - OpenSSH per-connection server daemon (20.161.92.111:41774). Jan 13 23:44:14.674201 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:44:14.674323 kernel: audit: type=1130 audit(1768347854.669:784): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.22.81:22-20.161.92.111:41774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:14.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.22.81:22-20.161.92.111:41774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:44:15.146000 audit[5705]: USER_ACCT pid=5705 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:15.154123 sshd[5705]: Accepted publickey for core from 20.161.92.111 port 41774 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:44:15.153000 audit[5705]: CRED_ACQ pid=5705 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:15.156918 sshd-session[5705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:44:15.160501 kernel: audit: type=1101 audit(1768347855.146:785): pid=5705 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:15.160607 kernel: audit: type=1103 audit(1768347855.153:786): pid=5705 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:15.165424 kernel: audit: type=1006 audit(1768347855.153:787): pid=5705 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 13 23:44:15.153000 audit[5705]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff57a0f40 a2=3 a3=0 items=0 ppid=1 pid=5705 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:15.172404 kernel: audit: type=1300 audit(1768347855.153:787): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff57a0f40 a2=3 a3=0 items=0 ppid=1 pid=5705 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:15.153000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:44:15.174879 kernel: audit: type=1327 audit(1768347855.153:787): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:44:15.184252 systemd-logind[1947]: New session 11 of user core. Jan 13 23:44:15.191487 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 13 23:44:15.197000 audit[5705]: USER_START pid=5705 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:15.203000 audit[5710]: CRED_ACQ pid=5710 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:15.210775 kernel: audit: type=1105 audit(1768347855.197:788): pid=5705 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:15.210918 kernel: audit: type=1103 audit(1768347855.203:789): pid=5710 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:15.518289 sshd[5710]: Connection closed by 20.161.92.111 port 41774 Jan 13 23:44:15.519400 sshd-session[5705]: pam_unix(sshd:session): session closed for user core Jan 13 23:44:15.521000 audit[5705]: USER_END pid=5705 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:15.521000 audit[5705]: CRED_DISP pid=5705 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:15.531301 kernel: audit: type=1106 audit(1768347855.521:790): pid=5705 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:15.531943 systemd[1]: sshd@9-172.31.22.81:22-20.161.92.111:41774.service: Deactivated successfully. Jan 13 23:44:15.536797 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 23:44:15.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.22.81:22-20.161.92.111:41774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:15.539427 kernel: audit: type=1104 audit(1768347855.521:791): pid=5705 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:15.540325 systemd-logind[1947]: Session 11 logged out. Waiting for processes to exit. Jan 13 23:44:15.544302 systemd-logind[1947]: Removed session 11. 
Jan 13 23:44:15.599864 kubelet[3562]: E0113 23:44:15.599790 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zwrvg" podUID="86622233-a85f-41fd-b458-2112644e82b9" Jan 13 23:44:15.606273 kubelet[3562]: E0113 23:44:15.606199 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78b56f8d66-w4wmx" podUID="d5bf06bf-e923-4a15-849e-51c08230b88e" Jan 13 23:44:15.625145 systemd[1]: Started sshd@10-172.31.22.81:22-20.161.92.111:41784.service - OpenSSH per-connection server daemon (20.161.92.111:41784). Jan 13 23:44:15.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.22.81:22-20.161.92.111:41784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:16.127000 audit[5722]: USER_ACCT pid=5722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:16.128996 sshd[5722]: Accepted publickey for core from 20.161.92.111 port 41784 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:44:16.129000 audit[5722]: CRED_ACQ pid=5722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:16.129000 audit[5722]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc62ebdd0 a2=3 a3=0 items=0 ppid=1 pid=5722 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:16.129000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:44:16.132689 sshd-session[5722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:44:16.144250 systemd-logind[1947]: New session 12 of user core. Jan 13 23:44:16.153513 systemd[1]: Started session-12.scope - Session 12 of User core. 
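By this point the kubelet has moved from ErrImagePull to ImagePullBackOff for several Calico workloads (goldmane-666569f655-zwrvg, whisker-78b56f8d66-w4wmx and the calico-apiserver replicas): the pull is retried on an increasing back-off rather than failing the pods outright, which is why the same "not found" message keeps recurring. A minimal sketch of how such stuck containers could be enumerated from outside the node, assuming cluster access and the official kubernetes Python client (neither is part of this log):

# Hedged sketch: list containers whose current state is an image-pull failure.
# Assumes a working kubeconfig and `pip install kubernetes`.
from kubernetes import client, config

config.load_kube_config()                      # or config.load_incluster_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    for cs in (pod.status.container_statuses or []):
        waiting = cs.state.waiting if cs.state else None
        if waiting and waiting.reason in ("ErrImagePull", "ImagePullBackOff"):
            print(f"{pod.metadata.namespace}/{pod.metadata.name} "
                  f"container={cs.name} reason={waiting.reason} image={cs.image}")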
Jan 13 23:44:16.158000 audit[5722]: USER_START pid=5722 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:16.162000 audit[5726]: CRED_ACQ pid=5726 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:16.592197 sshd[5726]: Connection closed by 20.161.92.111 port 41784 Jan 13 23:44:16.593397 sshd-session[5722]: pam_unix(sshd:session): session closed for user core Jan 13 23:44:16.595000 audit[5722]: USER_END pid=5722 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:16.595000 audit[5722]: CRED_DISP pid=5722 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:16.604672 systemd-logind[1947]: Session 12 logged out. Waiting for processes to exit. Jan 13 23:44:16.605366 systemd[1]: sshd@10-172.31.22.81:22-20.161.92.111:41784.service: Deactivated successfully. Jan 13 23:44:16.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.22.81:22-20.161.92.111:41784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:16.611173 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 23:44:16.615756 systemd-logind[1947]: Removed session 12. Jan 13 23:44:16.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.22.81:22-20.161.92.111:41796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:16.697263 systemd[1]: Started sshd@11-172.31.22.81:22-20.161.92.111:41796.service - OpenSSH per-connection server daemon (20.161.92.111:41796). 
Jan 13 23:44:17.164000 audit[5735]: USER_ACCT pid=5735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:17.166383 sshd[5735]: Accepted publickey for core from 20.161.92.111 port 41796 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:44:17.167000 audit[5735]: CRED_ACQ pid=5735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:17.168000 audit[5735]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffadd2690 a2=3 a3=0 items=0 ppid=1 pid=5735 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:17.168000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:44:17.171038 sshd-session[5735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:44:17.183070 systemd-logind[1947]: New session 13 of user core. Jan 13 23:44:17.189008 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 23:44:17.199000 audit[5735]: USER_START pid=5735 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:17.202000 audit[5739]: CRED_ACQ pid=5739 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:17.548526 sshd[5739]: Connection closed by 20.161.92.111 port 41796 Jan 13 23:44:17.547431 sshd-session[5735]: pam_unix(sshd:session): session closed for user core Jan 13 23:44:17.551000 audit[5735]: USER_END pid=5735 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:17.552000 audit[5735]: CRED_DISP pid=5735 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:17.558963 systemd-logind[1947]: Session 13 logged out. Waiting for processes to exit. Jan 13 23:44:17.561479 systemd[1]: sshd@11-172.31.22.81:22-20.161.92.111:41796.service: Deactivated successfully. Jan 13 23:44:17.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.22.81:22-20.161.92.111:41796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:17.567858 systemd[1]: session-13.scope: Deactivated successfully. 
Jan 13 23:44:17.574511 systemd-logind[1947]: Removed session 13. Jan 13 23:44:17.599857 kubelet[3562]: E0113 23:44:17.599777 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b8d4f74f9-kqpzz" podUID="3317981a-15b4-41f8-a3cf-26fbd9c6fbf1" Jan 13 23:44:18.614031 kubelet[3562]: E0113 23:44:18.612692 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-dvsmb" podUID="7796067b-5cab-42e9-af9d-320bb4208060" Jan 13 23:44:18.616898 kubelet[3562]: E0113 23:44:18.614973 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c9dddc9f7-sd8kc" podUID="db70aac6-82d4-4ef8-98ae-1ad4091dd76e" Jan 13 23:44:18.618966 kubelet[3562]: E0113 23:44:18.618524 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lblk8" podUID="884e12a9-b4d3-4695-bc91-5cdf1a464d0b" Jan 13 23:44:22.642763 systemd[1]: Started sshd@12-172.31.22.81:22-20.161.92.111:47874.service - OpenSSH per-connection server daemon (20.161.92.111:47874). Jan 13 23:44:22.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.22.81:22-20.161.92.111:47874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:44:22.646156 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 13 23:44:22.646270 kernel: audit: type=1130 audit(1768347862.642:811): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.22.81:22-20.161.92.111:47874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:23.110000 audit[5759]: USER_ACCT pid=5759 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:23.111980 sshd[5759]: Accepted publickey for core from 20.161.92.111 port 47874 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:44:23.136299 kernel: audit: type=1101 audit(1768347863.110:812): pid=5759 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:23.135000 audit[5759]: CRED_ACQ pid=5759 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:23.138831 sshd-session[5759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:44:23.146240 kernel: audit: type=1103 audit(1768347863.135:813): pid=5759 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:23.146427 kernel: audit: type=1006 audit(1768347863.135:814): pid=5759 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 13 23:44:23.146478 kernel: audit: type=1300 audit(1768347863.135:814): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffbc08180 a2=3 a3=0 items=0 ppid=1 pid=5759 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:23.135000 audit[5759]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffbc08180 a2=3 a3=0 items=0 ppid=1 pid=5759 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:23.135000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:44:23.155318 kernel: audit: type=1327 audit(1768347863.135:814): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:44:23.160499 systemd-logind[1947]: New session 14 of user core. Jan 13 23:44:23.168455 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 13 23:44:23.173000 audit[5759]: USER_START pid=5759 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:23.183902 kernel: audit: type=1105 audit(1768347863.173:815): pid=5759 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:23.184020 kernel: audit: type=1103 audit(1768347863.182:816): pid=5763 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:23.182000 audit[5763]: CRED_ACQ pid=5763 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:23.574207 sshd[5763]: Connection closed by 20.161.92.111 port 47874 Jan 13 23:44:23.576386 sshd-session[5759]: pam_unix(sshd:session): session closed for user core Jan 13 23:44:23.578000 audit[5759]: USER_END pid=5759 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:23.592641 kernel: audit: type=1106 audit(1768347863.578:817): pid=5759 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:23.592907 kernel: audit: type=1104 audit(1768347863.578:818): pid=5759 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:23.578000 audit[5759]: CRED_DISP pid=5759 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:23.587431 systemd[1]: sshd@12-172.31.22.81:22-20.161.92.111:47874.service: Deactivated successfully. Jan 13 23:44:23.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.22.81:22-20.161.92.111:47874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:23.594542 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 23:44:23.597248 systemd-logind[1947]: Session 14 logged out. Waiting for processes to exit. Jan 13 23:44:23.603017 systemd-logind[1947]: Removed session 14. 
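The sshd churn in this log follows the same audit pattern for every connection from 20.161.92.111: USER_ACCT and CRED_ACQ when the publickey login for "core" is accepted, USER_START when the session scope starts, then USER_END, CRED_DISP and a SERVICE_STOP for the per-connection unit when the client disconnects a few hundred milliseconds later. Purely as an illustration (the record types and ses= field are taken from the kernel-echoed audit lines above; the script and its input file name are assumptions), a hedged sketch that pairs session start and end records by audit session id to estimate how long each session lasted:

# Hedged sketch: pull kernel-echoed audit records out of journal text like the
# above and pair type 1105 (USER_START) with type 1106 (USER_END) per session.
import re
from collections import defaultdict

AUDIT = re.compile(r"type=(?P<type>\d{4}) audit\((?P<ts>\d+\.\d+):\d+\).*?ses=(?P<ses>\d+)")
sessions = defaultdict(dict)

with open("journal.txt") as fh:                 # hypothetical capture of this log
    for m in AUDIT.finditer(fh.read()):
        if m.group("type") == "1105":
            sessions[m.group("ses")]["start"] = float(m.group("ts"))
        elif m.group("type") == "1106":
            sessions[m.group("ses")]["end"] = float(m.group("ts"))

for ses, ev in sorted(sessions.items(), key=lambda kv: int(kv[0])):
    if "start" in ev and "end" in ev:
        print(f"audit ses={ses}: open for {ev['end'] - ev['start']:.3f}s")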
Jan 13 23:44:27.599858 containerd[1990]: time="2026-01-13T23:44:27.599715965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:44:27.890254 containerd[1990]: time="2026-01-13T23:44:27.890177514Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:44:27.892246 containerd[1990]: time="2026-01-13T23:44:27.891942798Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:44:27.892246 containerd[1990]: time="2026-01-13T23:44:27.891950262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:44:27.893002 kubelet[3562]: E0113 23:44:27.892804 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:44:27.894046 kubelet[3562]: E0113 23:44:27.893190 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:44:27.894993 kubelet[3562]: E0113 23:44:27.894782 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2dgwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bc5d5895-g9wvb_calico-apiserver(d529d459-4c8c-4f5e-b8a4-f53690574272): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:44:27.896234 kubelet[3562]: E0113 23:44:27.896096 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-g9wvb" podUID="d529d459-4c8c-4f5e-b8a4-f53690574272" Jan 13 23:44:28.666034 systemd[1]: Started sshd@13-172.31.22.81:22-20.161.92.111:47882.service - OpenSSH per-connection server daemon (20.161.92.111:47882). Jan 13 23:44:28.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.22.81:22-20.161.92.111:47882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:28.668386 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:44:28.668510 kernel: audit: type=1130 audit(1768347868.665:820): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.22.81:22-20.161.92.111:47882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:44:29.145288 sshd[5783]: Accepted publickey for core from 20.161.92.111 port 47882 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:44:29.137000 audit[5783]: USER_ACCT pid=5783 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:29.146260 kernel: audit: type=1101 audit(1768347869.137:821): pid=5783 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:29.145000 audit[5783]: CRED_ACQ pid=5783 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:29.148302 sshd-session[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:44:29.157801 kernel: audit: type=1103 audit(1768347869.145:822): pid=5783 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:29.157926 kernel: audit: type=1006 audit(1768347869.145:823): pid=5783 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 13 23:44:29.145000 audit[5783]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff2823080 a2=3 a3=0 items=0 ppid=1 pid=5783 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:29.164931 kernel: audit: type=1300 audit(1768347869.145:823): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff2823080 a2=3 a3=0 items=0 ppid=1 pid=5783 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:29.145000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:44:29.168017 kernel: audit: type=1327 audit(1768347869.145:823): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:44:29.174997 systemd-logind[1947]: New session 15 of user core. Jan 13 23:44:29.179452 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 13 23:44:29.185000 audit[5783]: USER_START pid=5783 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:29.194228 kernel: audit: type=1105 audit(1768347869.185:824): pid=5783 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:29.193000 audit[5787]: CRED_ACQ pid=5787 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:29.200223 kernel: audit: type=1103 audit(1768347869.193:825): pid=5787 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:29.538294 sshd[5787]: Connection closed by 20.161.92.111 port 47882 Jan 13 23:44:29.540203 sshd-session[5783]: pam_unix(sshd:session): session closed for user core Jan 13 23:44:29.543000 audit[5783]: USER_END pid=5783 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:29.550684 systemd[1]: sshd@13-172.31.22.81:22-20.161.92.111:47882.service: Deactivated successfully. Jan 13 23:44:29.543000 audit[5783]: CRED_DISP pid=5783 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:29.556047 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 23:44:29.556678 kernel: audit: type=1106 audit(1768347869.543:826): pid=5783 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:29.556790 kernel: audit: type=1104 audit(1768347869.543:827): pid=5783 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:29.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.22.81:22-20.161.92.111:47882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:29.560412 systemd-logind[1947]: Session 15 logged out. Waiting for processes to exit. Jan 13 23:44:29.563423 systemd-logind[1947]: Removed session 15. 
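Every failed pull in this log is containerd reporting a plain 404 from ghcr.io for the :v3.30.4 tag, i.e. the manifest for that tag does not resolve under the flatcar mirror namespace. As a hedged illustration of the same check done by hand against the OCI distribution API (the token endpoint, paths and Accept headers below are standard GHCR/OCI conventions, not taken from this log, and `requests` is an assumption), one could query the manifest directly and would expect the same 404:

# Hedged sketch: ask ghcr.io whether the tag that keeps failing actually exists,
# using the standard anonymous-pull token flow plus a manifest GET.
import requests

repo, tag = "flatcar/calico/apiserver", "v3.30.4"

# The token request itself may be denied if the repository is private or absent.
token = requests.get(
    "https://ghcr.io/token",
    params={"service": "ghcr.io", "scope": f"repository:{repo}:pull"},
).json()["token"]

resp = requests.get(
    f"https://ghcr.io/v2/{repo}/manifests/{tag}",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.oci.image.index.v1+json, "
                  "application/vnd.docker.distribution.manifest.list.v2+json",
    },
)
print(resp.status_code)   # 404 here would match the "not found" pulls in the log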
Jan 13 23:44:29.601275 containerd[1990]: time="2026-01-13T23:44:29.601088623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 13 23:44:29.897603 containerd[1990]: time="2026-01-13T23:44:29.897251576Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:44:29.900272 containerd[1990]: time="2026-01-13T23:44:29.900191996Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 13 23:44:29.900545 containerd[1990]: time="2026-01-13T23:44:29.900337628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 13 23:44:29.902724 kubelet[3562]: E0113 23:44:29.900775 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:44:29.902724 kubelet[3562]: E0113 23:44:29.900883 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:44:29.902724 kubelet[3562]: E0113 23:44:29.901041 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d4af49205dc04bf3a26511b093d7fa31,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f4qrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78b56f8d66-w4wmx_calico-system(d5bf06bf-e923-4a15-849e-51c08230b88e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 13 23:44:29.905647 containerd[1990]: time="2026-01-13T23:44:29.905319932Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 13 23:44:30.166921 containerd[1990]: time="2026-01-13T23:44:30.166743029Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:44:30.169901 containerd[1990]: time="2026-01-13T23:44:30.169658717Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 13 23:44:30.169901 containerd[1990]: time="2026-01-13T23:44:30.169802561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 13 23:44:30.170401 kubelet[3562]: E0113 23:44:30.170080 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:44:30.170401 kubelet[3562]: E0113 23:44:30.170187 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:44:30.170672 kubelet[3562]: E0113 23:44:30.170369 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f4qrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78b56f8d66-w4wmx_calico-system(d5bf06bf-e923-4a15-849e-51c08230b88e): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 13 23:44:30.172369 kubelet[3562]: E0113 23:44:30.172190 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78b56f8d66-w4wmx" podUID="d5bf06bf-e923-4a15-849e-51c08230b88e" Jan 13 23:44:30.602870 containerd[1990]: time="2026-01-13T23:44:30.601965836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:44:30.862709 containerd[1990]: time="2026-01-13T23:44:30.862310169Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:44:30.865227 containerd[1990]: time="2026-01-13T23:44:30.865025973Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:44:30.865227 containerd[1990]: time="2026-01-13T23:44:30.865092933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:44:30.865467 kubelet[3562]: E0113 23:44:30.865377 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:44:30.865467 kubelet[3562]: E0113 23:44:30.865441 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:44:30.866006 kubelet[3562]: E0113 23:44:30.865882 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-77d2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bc5d5895-dvsmb_calico-apiserver(7796067b-5cab-42e9-af9d-320bb4208060): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:44:30.867268 containerd[1990]: time="2026-01-13T23:44:30.867103845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 13 23:44:30.871709 kubelet[3562]: E0113 23:44:30.869988 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-dvsmb" podUID="7796067b-5cab-42e9-af9d-320bb4208060" Jan 13 23:44:31.143758 containerd[1990]: time="2026-01-13T23:44:31.143631258Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:44:31.149454 containerd[1990]: time="2026-01-13T23:44:31.149239170Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 13 23:44:31.149454 containerd[1990]: time="2026-01-13T23:44:31.149324550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 13 23:44:31.149694 kubelet[3562]: E0113 
23:44:31.149585 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:44:31.149694 kubelet[3562]: E0113 23:44:31.149657 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:44:31.150632 kubelet[3562]: E0113 23:44:31.149883 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hmwts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zwrvg_calico-system(86622233-a85f-41fd-b458-2112644e82b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 13 23:44:31.151781 kubelet[3562]: E0113 23:44:31.151663 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zwrvg" podUID="86622233-a85f-41fd-b458-2112644e82b9" Jan 13 23:44:31.599497 containerd[1990]: time="2026-01-13T23:44:31.599335617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 13 23:44:31.889432 containerd[1990]: time="2026-01-13T23:44:31.889212670Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:44:31.896168 containerd[1990]: time="2026-01-13T23:44:31.894561034Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 13 23:44:31.896168 containerd[1990]: time="2026-01-13T23:44:31.894701614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 13 23:44:31.896439 kubelet[3562]: E0113 23:44:31.896300 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:44:31.896439 kubelet[3562]: E0113 23:44:31.896398 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:44:31.896958 kubelet[3562]: E0113 23:44:31.896831 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-csplk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b8d4f74f9-kqpzz_calico-system(3317981a-15b4-41f8-a3cf-26fbd9c6fbf1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 13 23:44:31.898298 kubelet[3562]: E0113 23:44:31.898228 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b8d4f74f9-kqpzz" podUID="3317981a-15b4-41f8-a3cf-26fbd9c6fbf1" Jan 13 23:44:31.899278 containerd[1990]: time="2026-01-13T23:44:31.899198506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:44:32.158671 containerd[1990]: 
time="2026-01-13T23:44:32.157299583Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:44:32.160823 containerd[1990]: time="2026-01-13T23:44:32.160718407Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:44:32.161032 containerd[1990]: time="2026-01-13T23:44:32.160865707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:44:32.161117 kubelet[3562]: E0113 23:44:32.161064 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:44:32.161986 kubelet[3562]: E0113 23:44:32.161905 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:44:32.162866 kubelet[3562]: E0113 23:44:32.162225 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxwhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-7c9dddc9f7-sd8kc_calico-apiserver(db70aac6-82d4-4ef8-98ae-1ad4091dd76e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:44:32.164009 kubelet[3562]: E0113 23:44:32.163929 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c9dddc9f7-sd8kc" podUID="db70aac6-82d4-4ef8-98ae-1ad4091dd76e" Jan 13 23:44:33.599545 containerd[1990]: time="2026-01-13T23:44:33.599430683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 13 23:44:33.898419 containerd[1990]: time="2026-01-13T23:44:33.898268148Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:44:33.900747 containerd[1990]: time="2026-01-13T23:44:33.900689304Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 13 23:44:33.901028 containerd[1990]: time="2026-01-13T23:44:33.900724332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 13 23:44:33.901308 kubelet[3562]: E0113 23:44:33.901212 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:44:33.901804 kubelet[3562]: E0113 23:44:33.901306 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:44:33.903460 kubelet[3562]: E0113 23:44:33.903249 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ksj5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lblk8_calico-system(884e12a9-b4d3-4695-bc91-5cdf1a464d0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 13 23:44:33.907493 containerd[1990]: time="2026-01-13T23:44:33.907447680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 13 23:44:34.191610 containerd[1990]: time="2026-01-13T23:44:34.191444613Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:44:34.193832 containerd[1990]: time="2026-01-13T23:44:34.193681665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 13 23:44:34.194532 containerd[1990]: time="2026-01-13T23:44:34.193755657Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 13 23:44:34.194664 kubelet[3562]: E0113 23:44:34.194194 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:44:34.194664 kubelet[3562]: E0113 23:44:34.194256 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:44:34.194664 kubelet[3562]: E0113 23:44:34.194422 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ksj5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lblk8_calico-system(884e12a9-b4d3-4695-bc91-5cdf1a464d0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 13 23:44:34.196418 kubelet[3562]: E0113 23:44:34.196326 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lblk8" podUID="884e12a9-b4d3-4695-bc91-5cdf1a464d0b" Jan 13 23:44:34.637119 systemd[1]: Started sshd@14-172.31.22.81:22-20.161.92.111:38792.service - OpenSSH per-connection server daemon (20.161.92.111:38792). 
Jan 13 23:44:34.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.22.81:22-20.161.92.111:38792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:34.639956 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:44:34.640069 kernel: audit: type=1130 audit(1768347874.636:829): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.22.81:22-20.161.92.111:38792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:35.110000 audit[5801]: USER_ACCT pid=5801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:35.118471 sshd[5801]: Accepted publickey for core from 20.161.92.111 port 38792 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:44:35.117000 audit[5801]: CRED_ACQ pid=5801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:35.125255 kernel: audit: type=1101 audit(1768347875.110:830): pid=5801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:35.125389 kernel: audit: type=1103 audit(1768347875.117:831): pid=5801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:35.120889 sshd-session[5801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:44:35.129684 kernel: audit: type=1006 audit(1768347875.118:832): pid=5801 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 13 23:44:35.129943 kernel: audit: type=1300 audit(1768347875.118:832): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdbbf6f80 a2=3 a3=0 items=0 ppid=1 pid=5801 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:35.118000 audit[5801]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdbbf6f80 a2=3 a3=0 items=0 ppid=1 pid=5801 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:35.118000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:44:35.138722 kernel: audit: type=1327 audit(1768347875.118:832): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:44:35.143611 systemd-logind[1947]: New session 16 of user core. Jan 13 23:44:35.159542 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 13 23:44:35.165000 audit[5801]: USER_START pid=5801 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:35.174397 kernel: audit: type=1105 audit(1768347875.165:833): pid=5801 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:35.173000 audit[5805]: CRED_ACQ pid=5805 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:35.180216 kernel: audit: type=1103 audit(1768347875.173:834): pid=5805 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:35.488247 sshd[5805]: Connection closed by 20.161.92.111 port 38792 Jan 13 23:44:35.489198 sshd-session[5801]: pam_unix(sshd:session): session closed for user core Jan 13 23:44:35.493000 audit[5801]: USER_END pid=5801 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:35.493000 audit[5801]: CRED_DISP pid=5801 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:35.503928 systemd[1]: sshd@14-172.31.22.81:22-20.161.92.111:38792.service: Deactivated successfully. Jan 13 23:44:35.507880 kernel: audit: type=1106 audit(1768347875.493:835): pid=5801 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:35.508582 kernel: audit: type=1104 audit(1768347875.493:836): pid=5801 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:35.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.22.81:22-20.161.92.111:38792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:35.510163 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 23:44:35.515498 systemd-logind[1947]: Session 16 logged out. Waiting for processes to exit. Jan 13 23:44:35.517945 systemd-logind[1947]: Removed session 16. 
Jan 13 23:44:40.588853 systemd[1]: Started sshd@15-172.31.22.81:22-20.161.92.111:38804.service - OpenSSH per-connection server daemon (20.161.92.111:38804). Jan 13 23:44:40.592346 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:44:40.592464 kernel: audit: type=1130 audit(1768347880.587:838): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.22.81:22-20.161.92.111:38804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:40.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.22.81:22-20.161.92.111:38804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:40.604824 kubelet[3562]: E0113 23:44:40.603966 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-g9wvb" podUID="d529d459-4c8c-4f5e-b8a4-f53690574272" Jan 13 23:44:41.103000 audit[5817]: USER_ACCT pid=5817 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:41.104995 sshd[5817]: Accepted publickey for core from 20.161.92.111 port 38804 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:44:41.109000 audit[5817]: CRED_ACQ pid=5817 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:41.112672 sshd-session[5817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:44:41.116815 kernel: audit: type=1101 audit(1768347881.103:839): pid=5817 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:41.116977 kernel: audit: type=1103 audit(1768347881.109:840): pid=5817 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:41.120852 kernel: audit: type=1006 audit(1768347881.109:841): pid=5817 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 13 23:44:41.109000 audit[5817]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe7571d00 a2=3 a3=0 items=0 ppid=1 pid=5817 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:41.129474 kernel: audit: type=1300 audit(1768347881.109:841): arch=c00000b7 syscall=64 
success=yes exit=3 a0=8 a1=ffffe7571d00 a2=3 a3=0 items=0 ppid=1 pid=5817 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:41.109000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:44:41.134187 kernel: audit: type=1327 audit(1768347881.109:841): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:44:41.137891 systemd-logind[1947]: New session 17 of user core. Jan 13 23:44:41.143475 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 23:44:41.149000 audit[5817]: USER_START pid=5817 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:41.157000 audit[5821]: CRED_ACQ pid=5821 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:41.164181 kernel: audit: type=1105 audit(1768347881.149:842): pid=5817 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:41.164328 kernel: audit: type=1103 audit(1768347881.157:843): pid=5821 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:41.478587 sshd[5821]: Connection closed by 20.161.92.111 port 38804 Jan 13 23:44:41.478475 sshd-session[5817]: pam_unix(sshd:session): session closed for user core Jan 13 23:44:41.480000 audit[5817]: USER_END pid=5817 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:41.490664 systemd[1]: sshd@15-172.31.22.81:22-20.161.92.111:38804.service: Deactivated successfully. 
Jan 13 23:44:41.497347 kernel: audit: type=1106 audit(1768347881.480:844): pid=5817 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:41.497483 kernel: audit: type=1104 audit(1768347881.480:845): pid=5817 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:41.480000 audit[5817]: CRED_DISP pid=5817 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:41.495642 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 23:44:41.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.22.81:22-20.161.92.111:38804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:41.499288 systemd-logind[1947]: Session 17 logged out. Waiting for processes to exit. Jan 13 23:44:41.503824 systemd-logind[1947]: Removed session 17. Jan 13 23:44:41.575276 systemd[1]: Started sshd@16-172.31.22.81:22-20.161.92.111:38812.service - OpenSSH per-connection server daemon (20.161.92.111:38812). Jan 13 23:44:41.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.22.81:22-20.161.92.111:38812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:42.052000 audit[5832]: USER_ACCT pid=5832 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:42.053477 sshd[5832]: Accepted publickey for core from 20.161.92.111 port 38812 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:44:42.053000 audit[5832]: CRED_ACQ pid=5832 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:42.054000 audit[5832]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffaa679b0 a2=3 a3=0 items=0 ppid=1 pid=5832 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:42.054000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:44:42.056998 sshd-session[5832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:44:42.066617 systemd-logind[1947]: New session 18 of user core. Jan 13 23:44:42.075533 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 13 23:44:42.080000 audit[5832]: USER_START pid=5832 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:42.084000 audit[5836]: CRED_ACQ pid=5836 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:43.499227 sshd[5836]: Connection closed by 20.161.92.111 port 38812 Jan 13 23:44:43.500283 sshd-session[5832]: pam_unix(sshd:session): session closed for user core Jan 13 23:44:43.503000 audit[5832]: USER_END pid=5832 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:43.504000 audit[5832]: CRED_DISP pid=5832 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:43.511891 systemd[1]: sshd@16-172.31.22.81:22-20.161.92.111:38812.service: Deactivated successfully. Jan 13 23:44:43.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.22.81:22-20.161.92.111:38812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:43.518013 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 23:44:43.524221 systemd-logind[1947]: Session 18 logged out. Waiting for processes to exit. Jan 13 23:44:43.527364 systemd-logind[1947]: Removed session 18. Jan 13 23:44:43.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.22.81:22-20.161.92.111:55828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:43.599951 systemd[1]: Started sshd@17-172.31.22.81:22-20.161.92.111:55828.service - OpenSSH per-connection server daemon (20.161.92.111:55828). 
Jan 13 23:44:43.607041 kubelet[3562]: E0113 23:44:43.606870 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-dvsmb" podUID="7796067b-5cab-42e9-af9d-320bb4208060" Jan 13 23:44:44.094000 audit[5846]: USER_ACCT pid=5846 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:44.095927 sshd[5846]: Accepted publickey for core from 20.161.92.111 port 55828 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:44:44.097000 audit[5846]: CRED_ACQ pid=5846 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:44.097000 audit[5846]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe752aa00 a2=3 a3=0 items=0 ppid=1 pid=5846 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:44.097000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:44:44.101677 sshd-session[5846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:44:44.114253 systemd-logind[1947]: New session 19 of user core. Jan 13 23:44:44.119478 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 13 23:44:44.128000 audit[5846]: USER_START pid=5846 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:44.133000 audit[5876]: CRED_ACQ pid=5876 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:44.602449 kubelet[3562]: E0113 23:44:44.601490 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b8d4f74f9-kqpzz" podUID="3317981a-15b4-41f8-a3cf-26fbd9c6fbf1" Jan 13 23:44:44.603531 kubelet[3562]: E0113 23:44:44.603400 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zwrvg" podUID="86622233-a85f-41fd-b458-2112644e82b9" Jan 13 23:44:44.607753 kubelet[3562]: E0113 23:44:44.607510 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78b56f8d66-w4wmx" podUID="d5bf06bf-e923-4a15-849e-51c08230b88e" Jan 13 23:44:45.522000 audit[5909]: NETFILTER_CFG table=filter:147 family=2 entries=26 op=nft_register_rule pid=5909 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:44:45.522000 audit[5909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffdede3170 a2=0 a3=1 items=0 ppid=3666 pid=5909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:45.522000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:44:45.531000 audit[5909]: NETFILTER_CFG table=nat:148 family=2 entries=20 
op=nft_register_rule pid=5909 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:44:45.531000 audit[5909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffdede3170 a2=0 a3=1 items=0 ppid=3666 pid=5909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:45.531000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:44:45.582629 sshd[5876]: Connection closed by 20.161.92.111 port 55828 Jan 13 23:44:45.583729 sshd-session[5846]: pam_unix(sshd:session): session closed for user core Jan 13 23:44:45.598347 kernel: kauditd_printk_skb: 26 callbacks suppressed Jan 13 23:44:45.598532 kernel: audit: type=1325 audit(1768347885.591:864): table=filter:149 family=2 entries=38 op=nft_register_rule pid=5911 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:44:45.591000 audit[5911]: NETFILTER_CFG table=filter:149 family=2 entries=38 op=nft_register_rule pid=5911 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:44:45.591000 audit[5911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffeb9937f0 a2=0 a3=1 items=0 ppid=3666 pid=5911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:45.607806 kernel: audit: type=1300 audit(1768347885.591:864): arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffeb9937f0 a2=0 a3=1 items=0 ppid=3666 pid=5911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:45.607944 kernel: audit: type=1327 audit(1768347885.591:864): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:44:45.591000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:44:45.610000 audit[5911]: NETFILTER_CFG table=nat:150 family=2 entries=20 op=nft_register_rule pid=5911 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:44:45.610000 audit[5911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffeb9937f0 a2=0 a3=1 items=0 ppid=3666 pid=5911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:45.626059 kernel: audit: type=1325 audit(1768347885.610:865): table=nat:150 family=2 entries=20 op=nft_register_rule pid=5911 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:44:45.626214 kernel: audit: type=1300 audit(1768347885.610:865): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffeb9937f0 a2=0 a3=1 items=0 ppid=3666 pid=5911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:45.610000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:44:45.630511 kernel: audit: type=1327 audit(1768347885.610:865): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:44:45.622000 audit[5846]: USER_END pid=5846 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:45.633253 systemd[1]: sshd@17-172.31.22.81:22-20.161.92.111:55828.service: Deactivated successfully. Jan 13 23:44:45.633687 systemd-logind[1947]: Session 19 logged out. Waiting for processes to exit. Jan 13 23:44:45.639722 kernel: audit: type=1106 audit(1768347885.622:866): pid=5846 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:45.643556 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 23:44:45.644267 kernel: audit: type=1104 audit(1768347885.622:867): pid=5846 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:45.622000 audit[5846]: CRED_DISP pid=5846 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:45.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.22.81:22-20.161.92.111:55828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:45.658227 kernel: audit: type=1131 audit(1768347885.634:868): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.22.81:22-20.161.92.111:55828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:45.657364 systemd-logind[1947]: Removed session 19. Jan 13 23:44:45.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.22.81:22-20.161.92.111:55838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:45.682997 systemd[1]: Started sshd@18-172.31.22.81:22-20.161.92.111:55838.service - OpenSSH per-connection server daemon (20.161.92.111:55838). Jan 13 23:44:45.690214 kernel: audit: type=1130 audit(1768347885.683:869): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.22.81:22-20.161.92.111:55838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:44:46.193000 audit[5916]: USER_ACCT pid=5916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:46.194558 sshd[5916]: Accepted publickey for core from 20.161.92.111 port 55838 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:44:46.195000 audit[5916]: CRED_ACQ pid=5916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:46.196000 audit[5916]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdee23300 a2=3 a3=0 items=0 ppid=1 pid=5916 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:46.196000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:44:46.199243 sshd-session[5916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:44:46.209420 systemd-logind[1947]: New session 20 of user core. Jan 13 23:44:46.217652 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 23:44:46.224000 audit[5916]: USER_START pid=5916 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:46.227000 audit[5920]: CRED_ACQ pid=5920 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:46.861232 sshd[5920]: Connection closed by 20.161.92.111 port 55838 Jan 13 23:44:46.862947 sshd-session[5916]: pam_unix(sshd:session): session closed for user core Jan 13 23:44:46.865000 audit[5916]: USER_END pid=5916 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:46.865000 audit[5916]: CRED_DISP pid=5916 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:46.873012 systemd[1]: sshd@18-172.31.22.81:22-20.161.92.111:55838.service: Deactivated successfully. Jan 13 23:44:46.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.22.81:22-20.161.92.111:55838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:46.878715 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 23:44:46.880875 systemd-logind[1947]: Session 20 logged out. Waiting for processes to exit. 
Jan 13 23:44:46.885656 systemd-logind[1947]: Removed session 20. Jan 13 23:44:46.953632 systemd[1]: Started sshd@19-172.31.22.81:22-20.161.92.111:55844.service - OpenSSH per-connection server daemon (20.161.92.111:55844). Jan 13 23:44:46.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.22.81:22-20.161.92.111:55844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:47.423000 audit[5930]: USER_ACCT pid=5930 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:47.425240 sshd[5930]: Accepted publickey for core from 20.161.92.111 port 55844 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:44:47.425000 audit[5930]: CRED_ACQ pid=5930 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:47.426000 audit[5930]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffedf05180 a2=3 a3=0 items=0 ppid=1 pid=5930 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:47.426000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:44:47.428949 sshd-session[5930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:44:47.437743 systemd-logind[1947]: New session 21 of user core. Jan 13 23:44:47.448617 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 13 23:44:47.455000 audit[5930]: USER_START pid=5930 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:47.458000 audit[5934]: CRED_ACQ pid=5934 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:47.599612 kubelet[3562]: E0113 23:44:47.599299 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c9dddc9f7-sd8kc" podUID="db70aac6-82d4-4ef8-98ae-1ad4091dd76e" Jan 13 23:44:47.602581 kubelet[3562]: E0113 23:44:47.601782 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lblk8" podUID="884e12a9-b4d3-4695-bc91-5cdf1a464d0b" Jan 13 23:44:47.829538 sshd[5934]: Connection closed by 20.161.92.111 port 55844 Jan 13 23:44:47.828006 sshd-session[5930]: pam_unix(sshd:session): session closed for user core Jan 13 23:44:47.831000 audit[5930]: USER_END pid=5930 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:47.832000 audit[5930]: CRED_DISP pid=5930 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:47.838011 systemd[1]: sshd@19-172.31.22.81:22-20.161.92.111:55844.service: Deactivated successfully. Jan 13 23:44:47.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.22.81:22-20.161.92.111:55844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:47.843172 systemd[1]: session-21.scope: Deactivated successfully. 
Jan 13 23:44:47.848269 systemd-logind[1947]: Session 21 logged out. Waiting for processes to exit. Jan 13 23:44:47.851258 systemd-logind[1947]: Removed session 21. Jan 13 23:44:52.921574 systemd[1]: Started sshd@20-172.31.22.81:22-20.161.92.111:60816.service - OpenSSH per-connection server daemon (20.161.92.111:60816). Jan 13 23:44:52.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.22.81:22-20.161.92.111:60816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:52.923459 kernel: kauditd_printk_skb: 21 callbacks suppressed Jan 13 23:44:52.925224 kernel: audit: type=1130 audit(1768347892.920:887): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.22.81:22-20.161.92.111:60816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:53.401000 audit[5950]: USER_ACCT pid=5950 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:53.402607 sshd[5950]: Accepted publickey for core from 20.161.92.111 port 60816 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:44:53.409000 audit[5950]: CRED_ACQ pid=5950 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:53.414049 sshd-session[5950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:44:53.417046 kernel: audit: type=1101 audit(1768347893.401:888): pid=5950 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:53.417225 kernel: audit: type=1103 audit(1768347893.409:889): pid=5950 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:53.427072 kernel: audit: type=1006 audit(1768347893.409:890): pid=5950 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 13 23:44:53.409000 audit[5950]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff3e95890 a2=3 a3=0 items=0 ppid=1 pid=5950 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:53.436354 kernel: audit: type=1300 audit(1768347893.409:890): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff3e95890 a2=3 a3=0 items=0 ppid=1 pid=5950 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:53.440715 systemd-logind[1947]: New session 22 of user core. 
Jan 13 23:44:53.409000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:44:53.445485 kernel: audit: type=1327 audit(1768347893.409:890): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:44:53.448777 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 23:44:53.465000 audit[5950]: USER_START pid=5950 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:53.476000 audit[5955]: CRED_ACQ pid=5955 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:53.483024 kernel: audit: type=1105 audit(1768347893.465:891): pid=5950 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:53.483929 kernel: audit: type=1103 audit(1768347893.476:892): pid=5955 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:53.514000 audit[5956]: NETFILTER_CFG table=filter:151 family=2 entries=26 op=nft_register_rule pid=5956 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:44:53.514000 audit[5956]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc419b860 a2=0 a3=1 items=0 ppid=3666 pid=5956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:53.526217 kernel: audit: type=1325 audit(1768347893.514:893): table=filter:151 family=2 entries=26 op=nft_register_rule pid=5956 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:44:53.526345 kernel: audit: type=1300 audit(1768347893.514:893): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc419b860 a2=0 a3=1 items=0 ppid=3666 pid=5956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:53.514000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:44:53.542000 audit[5956]: NETFILTER_CFG table=nat:152 family=2 entries=104 op=nft_register_chain pid=5956 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:44:53.542000 audit[5956]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffc419b860 a2=0 a3=1 items=0 ppid=3666 pid=5956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:53.542000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:44:53.816248 sshd[5955]: Connection closed by 20.161.92.111 port 60816 Jan 13 23:44:53.818420 sshd-session[5950]: pam_unix(sshd:session): session closed for user core Jan 13 23:44:53.821000 audit[5950]: USER_END pid=5950 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:53.821000 audit[5950]: CRED_DISP pid=5950 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:53.830930 systemd-logind[1947]: Session 22 logged out. Waiting for processes to exit. Jan 13 23:44:53.831902 systemd[1]: sshd@20-172.31.22.81:22-20.161.92.111:60816.service: Deactivated successfully. Jan 13 23:44:53.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.22.81:22-20.161.92.111:60816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:53.841647 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 23:44:53.851972 systemd-logind[1947]: Removed session 22. Jan 13 23:44:54.601186 kubelet[3562]: E0113 23:44:54.600768 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-g9wvb" podUID="d529d459-4c8c-4f5e-b8a4-f53690574272" Jan 13 23:44:54.602703 kubelet[3562]: E0113 23:44:54.602576 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-dvsmb" podUID="7796067b-5cab-42e9-af9d-320bb4208060" Jan 13 23:44:57.599613 kubelet[3562]: E0113 23:44:57.599496 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78b56f8d66-w4wmx" podUID="d5bf06bf-e923-4a15-849e-51c08230b88e" Jan 13 23:44:58.606304 kubelet[3562]: E0113 23:44:58.606217 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zwrvg" podUID="86622233-a85f-41fd-b458-2112644e82b9" Jan 13 23:44:58.615119 kubelet[3562]: E0113 23:44:58.614961 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lblk8" podUID="884e12a9-b4d3-4695-bc91-5cdf1a464d0b" Jan 13 23:44:58.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.22.81:22-20.161.92.111:60822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:58.916695 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 13 23:44:58.916783 kernel: audit: type=1130 audit(1768347898.912:898): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.22.81:22-20.161.92.111:60822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:58.913759 systemd[1]: Started sshd@21-172.31.22.81:22-20.161.92.111:60822.service - OpenSSH per-connection server daemon (20.161.92.111:60822). 
Jan 13 23:44:59.403000 audit[5971]: USER_ACCT pid=5971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:59.404986 sshd[5971]: Accepted publickey for core from 20.161.92.111 port 60822 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:44:59.412000 audit[5971]: CRED_ACQ pid=5971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:59.417991 sshd-session[5971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:44:59.419622 kernel: audit: type=1101 audit(1768347899.403:899): pid=5971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:59.419735 kernel: audit: type=1103 audit(1768347899.412:900): pid=5971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:59.419799 kernel: audit: type=1006 audit(1768347899.415:901): pid=5971 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 13 23:44:59.423301 kernel: audit: type=1300 audit(1768347899.415:901): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd538ad70 a2=3 a3=0 items=0 ppid=1 pid=5971 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:59.415000 audit[5971]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd538ad70 a2=3 a3=0 items=0 ppid=1 pid=5971 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:44:59.415000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:44:59.431869 kernel: audit: type=1327 audit(1768347899.415:901): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:44:59.444549 systemd-logind[1947]: New session 23 of user core. Jan 13 23:44:59.451824 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 13 23:44:59.520000 audit[5971]: USER_START pid=5971 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:59.530000 audit[5975]: CRED_ACQ pid=5975 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:59.537907 kernel: audit: type=1105 audit(1768347899.520:902): pid=5971 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:59.538047 kernel: audit: type=1103 audit(1768347899.530:903): pid=5975 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:59.599261 kubelet[3562]: E0113 23:44:59.598779 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b8d4f74f9-kqpzz" podUID="3317981a-15b4-41f8-a3cf-26fbd9c6fbf1" Jan 13 23:44:59.927157 sshd[5975]: Connection closed by 20.161.92.111 port 60822 Jan 13 23:44:59.928006 sshd-session[5971]: pam_unix(sshd:session): session closed for user core Jan 13 23:44:59.931000 audit[5971]: USER_END pid=5971 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:59.939615 systemd-logind[1947]: Session 23 logged out. Waiting for processes to exit. Jan 13 23:44:59.941033 systemd[1]: sshd@21-172.31.22.81:22-20.161.92.111:60822.service: Deactivated successfully. 
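[Editor's note] The "Back-off pulling image" / ImagePullBackOff messages recur at widening intervals because the kubelet retries failed pulls with exponential backoff. The sketch below only illustrates that retry shape; the initial 10s delay and 300s cap are assumed kubelet-style defaults, not values recorded in this log.

# Minimal sketch of the retry pattern behind the repeated ImagePullBackOff
# entries. Assumption: exponential backoff starting around 10s and doubling
# up to a 300s cap per image.
import itertools

def backoff_delays(initial: float = 10.0, cap: float = 300.0):
    """Yield successive delays: initial, 2*initial, ... capped at `cap`."""
    delay = initial
    while True:
        yield delay
        delay = min(delay * 2, cap)

if __name__ == "__main__":
    # First eight delays (seconds) between pull attempts for one image.
    print(list(itertools.islice(backoff_delays(), 8)))
    # -> [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]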
Jan 13 23:44:59.932000 audit[5971]: CRED_DISP pid=5971 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:59.950507 kernel: audit: type=1106 audit(1768347899.931:904): pid=5971 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:59.952270 kernel: audit: type=1104 audit(1768347899.932:905): pid=5971 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:44:59.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.22.81:22-20.161.92.111:60822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:44:59.954450 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 23:44:59.963476 systemd-logind[1947]: Removed session 23. Jan 13 23:45:02.599210 kubelet[3562]: E0113 23:45:02.598243 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c9dddc9f7-sd8kc" podUID="db70aac6-82d4-4ef8-98ae-1ad4091dd76e" Jan 13 23:45:05.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.22.81:22-20.161.92.111:59354 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:05.033311 systemd[1]: Started sshd@22-172.31.22.81:22-20.161.92.111:59354.service - OpenSSH per-connection server daemon (20.161.92.111:59354). Jan 13 23:45:05.037215 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:45:05.037351 kernel: audit: type=1130 audit(1768347905.032:907): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.22.81:22-20.161.92.111:59354 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:45:05.532000 audit[5988]: USER_ACCT pid=5988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:05.534206 sshd[5988]: Accepted publickey for core from 20.161.92.111 port 59354 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:45:05.541201 kernel: audit: type=1101 audit(1768347905.532:908): pid=5988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:05.540000 audit[5988]: CRED_ACQ pid=5988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:05.546168 sshd-session[5988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:45:05.551274 kernel: audit: type=1103 audit(1768347905.540:909): pid=5988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:05.551463 kernel: audit: type=1006 audit(1768347905.541:910): pid=5988 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 13 23:45:05.553461 kernel: audit: type=1300 audit(1768347905.541:910): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee595b10 a2=3 a3=0 items=0 ppid=1 pid=5988 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:05.541000 audit[5988]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee595b10 a2=3 a3=0 items=0 ppid=1 pid=5988 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:05.541000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:45:05.563059 kernel: audit: type=1327 audit(1768347905.541:910): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:45:05.570520 systemd-logind[1947]: New session 24 of user core. Jan 13 23:45:05.580880 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 13 23:45:05.590000 audit[5988]: USER_START pid=5988 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:05.601206 kernel: audit: type=1105 audit(1768347905.590:911): pid=5988 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:05.600000 audit[5992]: CRED_ACQ pid=5992 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:05.607905 kubelet[3562]: E0113 23:45:05.607852 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-g9wvb" podUID="d529d459-4c8c-4f5e-b8a4-f53690574272" Jan 13 23:45:05.610196 kernel: audit: type=1103 audit(1768347905.600:912): pid=5992 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:05.962430 sshd[5992]: Connection closed by 20.161.92.111 port 59354 Jan 13 23:45:05.964257 sshd-session[5988]: pam_unix(sshd:session): session closed for user core Jan 13 23:45:05.967000 audit[5988]: USER_END pid=5988 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:05.981359 systemd[1]: sshd@22-172.31.22.81:22-20.161.92.111:59354.service: Deactivated successfully. 
Jan 13 23:45:05.968000 audit[5988]: CRED_DISP pid=5988 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:05.986615 kernel: audit: type=1106 audit(1768347905.967:913): pid=5988 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:05.986744 kernel: audit: type=1104 audit(1768347905.968:914): pid=5988 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:05.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.22.81:22-20.161.92.111:59354 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:05.988822 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 23:45:05.995439 systemd-logind[1947]: Session 24 logged out. Waiting for processes to exit. Jan 13 23:45:06.001433 systemd-logind[1947]: Removed session 24. Jan 13 23:45:07.598023 kubelet[3562]: E0113 23:45:07.597939 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-dvsmb" podUID="7796067b-5cab-42e9-af9d-320bb4208060" Jan 13 23:45:08.603690 kubelet[3562]: E0113 23:45:08.603508 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78b56f8d66-w4wmx" podUID="d5bf06bf-e923-4a15-849e-51c08230b88e" Jan 13 23:45:11.060998 systemd[1]: Started sshd@23-172.31.22.81:22-20.161.92.111:59362.service - OpenSSH per-connection server daemon (20.161.92.111:59362). Jan 13 23:45:11.065160 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:45:11.065300 kernel: audit: type=1130 audit(1768347911.060:916): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.22.81:22-20.161.92.111:59362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:45:11.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.22.81:22-20.161.92.111:59362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:11.583000 audit[6011]: USER_ACCT pid=6011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:11.590872 sshd[6011]: Accepted publickey for core from 20.161.92.111 port 59362 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:45:11.594793 sshd-session[6011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:45:11.591000 audit[6011]: CRED_ACQ pid=6011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:11.602080 kernel: audit: type=1101 audit(1768347911.583:917): pid=6011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:11.602365 kernel: audit: type=1103 audit(1768347911.591:918): pid=6011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:11.602422 containerd[1990]: time="2026-01-13T23:45:11.601730219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 13 23:45:11.604581 kubelet[3562]: E0113 23:45:11.603737 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lblk8" podUID="884e12a9-b4d3-4695-bc91-5cdf1a464d0b" Jan 13 23:45:11.611478 kernel: audit: type=1006 audit(1768347911.591:919): pid=6011 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 13 23:45:11.591000 audit[6011]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc9bf8550 a2=3 a3=0 items=0 ppid=1 pid=6011 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:11.620337 kernel: audit: type=1300 audit(1768347911.591:919): arch=c00000b7 
syscall=64 success=yes exit=3 a0=8 a1=ffffc9bf8550 a2=3 a3=0 items=0 ppid=1 pid=6011 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:11.591000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:45:11.628464 kernel: audit: type=1327 audit(1768347911.591:919): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:45:11.629051 systemd-logind[1947]: New session 25 of user core. Jan 13 23:45:11.636547 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 23:45:11.652000 audit[6011]: USER_START pid=6011 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:11.664219 kernel: audit: type=1105 audit(1768347911.652:920): pid=6011 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:11.662000 audit[6015]: CRED_ACQ pid=6015 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:11.671185 kernel: audit: type=1103 audit(1768347911.662:921): pid=6015 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:11.899152 containerd[1990]: time="2026-01-13T23:45:11.898892617Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:45:11.901160 containerd[1990]: time="2026-01-13T23:45:11.900120541Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 13 23:45:11.901160 containerd[1990]: time="2026-01-13T23:45:11.900305245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 13 23:45:11.901656 kubelet[3562]: E0113 23:45:11.901553 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:45:11.901968 kubelet[3562]: E0113 23:45:11.901814 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:45:11.904510 kubelet[3562]: E0113 23:45:11.904329 3562 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hmwts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zwrvg_calico-system(86622233-a85f-41fd-b458-2112644e82b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 13 23:45:11.906084 kubelet[3562]: E0113 23:45:11.906000 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zwrvg" podUID="86622233-a85f-41fd-b458-2112644e82b9" Jan 13 23:45:12.015363 sshd[6015]: Connection closed by 20.161.92.111 port 59362 Jan 13 23:45:12.016231 sshd-session[6011]: 
pam_unix(sshd:session): session closed for user core Jan 13 23:45:12.020000 audit[6011]: USER_END pid=6011 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:12.030353 systemd[1]: sshd@23-172.31.22.81:22-20.161.92.111:59362.service: Deactivated successfully. Jan 13 23:45:12.020000 audit[6011]: CRED_DISP pid=6011 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:12.038757 kernel: audit: type=1106 audit(1768347912.020:922): pid=6011 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:12.038894 kernel: audit: type=1104 audit(1768347912.020:923): pid=6011 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:12.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.22.81:22-20.161.92.111:59362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:12.042945 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 23:45:12.051004 systemd-logind[1947]: Session 25 logged out. Waiting for processes to exit. Jan 13 23:45:12.054230 systemd-logind[1947]: Removed session 25. 
Jan 13 23:45:14.598457 containerd[1990]: time="2026-01-13T23:45:14.598393310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 13 23:45:14.901410 containerd[1990]: time="2026-01-13T23:45:14.901344136Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:45:14.902776 containerd[1990]: time="2026-01-13T23:45:14.902633476Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 13 23:45:14.903175 containerd[1990]: time="2026-01-13T23:45:14.902958952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 13 23:45:14.903271 kubelet[3562]: E0113 23:45:14.903026 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:45:14.903271 kubelet[3562]: E0113 23:45:14.903203 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:45:14.904714 kubelet[3562]: E0113 23:45:14.903582 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-csplk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b8d4f74f9-kqpzz_calico-system(3317981a-15b4-41f8-a3cf-26fbd9c6fbf1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 13 23:45:14.905478 kubelet[3562]: E0113 23:45:14.905234 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b8d4f74f9-kqpzz" podUID="3317981a-15b4-41f8-a3cf-26fbd9c6fbf1" Jan 13 23:45:16.599834 containerd[1990]: time="2026-01-13T23:45:16.599641192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:45:16.868919 containerd[1990]: time="2026-01-13T23:45:16.868586453Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:45:16.870351 containerd[1990]: time="2026-01-13T23:45:16.870086945Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:45:16.870351 containerd[1990]: time="2026-01-13T23:45:16.870264737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:45:16.870588 kubelet[3562]: E0113 23:45:16.870487 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:45:16.870588 kubelet[3562]: E0113 23:45:16.870551 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:45:16.871389 kubelet[3562]: E0113 23:45:16.870709 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxwhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c9dddc9f7-sd8kc_calico-apiserver(db70aac6-82d4-4ef8-98ae-1ad4091dd76e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:45:16.873180 kubelet[3562]: E0113 23:45:16.872391 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c9dddc9f7-sd8kc" podUID="db70aac6-82d4-4ef8-98ae-1ad4091dd76e" Jan 13 23:45:17.113686 systemd[1]: Started sshd@24-172.31.22.81:22-20.161.92.111:59034.service - OpenSSH per-connection server daemon (20.161.92.111:59034). Jan 13 23:45:17.122458 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:45:17.122521 kernel: audit: type=1130 audit(1768347917.112:925): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.22.81:22-20.161.92.111:59034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:17.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.22.81:22-20.161.92.111:59034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:45:17.591000 audit[6055]: USER_ACCT pid=6055 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:17.599251 sshd[6055]: Accepted publickey for core from 20.161.92.111 port 59034 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:45:17.602172 containerd[1990]: time="2026-01-13T23:45:17.601909313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:45:17.603683 kernel: audit: type=1101 audit(1768347917.591:926): pid=6055 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:17.604000 audit[6055]: CRED_ACQ pid=6055 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:17.612339 sshd-session[6055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:45:17.618678 kernel: audit: type=1103 audit(1768347917.604:927): pid=6055 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:17.620068 kernel: audit: type=1006 audit(1768347917.609:928): pid=6055 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 13 23:45:17.609000 audit[6055]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe77cfc80 a2=3 a3=0 items=0 ppid=1 pid=6055 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:17.628470 kernel: audit: type=1300 audit(1768347917.609:928): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe77cfc80 a2=3 a3=0 items=0 ppid=1 pid=6055 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:17.609000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:45:17.634906 kernel: audit: type=1327 audit(1768347917.609:928): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:45:17.642573 systemd-logind[1947]: New session 26 of user core. Jan 13 23:45:17.646592 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 13 23:45:17.660000 audit[6055]: USER_START pid=6055 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:17.666000 audit[6059]: CRED_ACQ pid=6059 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:17.674445 kernel: audit: type=1105 audit(1768347917.660:929): pid=6055 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:17.674784 kernel: audit: type=1103 audit(1768347917.666:930): pid=6059 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:17.916181 containerd[1990]: time="2026-01-13T23:45:17.916066867Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:45:17.917842 containerd[1990]: time="2026-01-13T23:45:17.917624287Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:45:17.917842 containerd[1990]: time="2026-01-13T23:45:17.917764723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:45:17.920179 kubelet[3562]: E0113 23:45:17.918609 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:45:17.920179 kubelet[3562]: E0113 23:45:17.918685 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:45:17.920179 kubelet[3562]: E0113 23:45:17.918905 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2dgwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bc5d5895-g9wvb_calico-apiserver(d529d459-4c8c-4f5e-b8a4-f53690574272): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:45:17.921442 kubelet[3562]: E0113 23:45:17.921238 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-g9wvb" podUID="d529d459-4c8c-4f5e-b8a4-f53690574272" Jan 13 23:45:18.048329 sshd[6059]: Connection closed by 20.161.92.111 port 59034 Jan 13 23:45:18.049407 sshd-session[6055]: pam_unix(sshd:session): session closed for user core Jan 13 23:45:18.053000 audit[6055]: USER_END pid=6055 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:18.061619 systemd[1]: sshd@24-172.31.22.81:22-20.161.92.111:59034.service: Deactivated successfully. 
Jan 13 23:45:18.054000 audit[6055]: CRED_DISP pid=6055 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:18.070689 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 23:45:18.072145 kernel: audit: type=1106 audit(1768347918.053:931): pid=6055 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:18.073202 kernel: audit: type=1104 audit(1768347918.054:932): pid=6055 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:18.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.22.81:22-20.161.92.111:59034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:18.075770 systemd-logind[1947]: Session 26 logged out. Waiting for processes to exit. Jan 13 23:45:18.083862 systemd-logind[1947]: Removed session 26. Jan 13 23:45:18.600528 containerd[1990]: time="2026-01-13T23:45:18.599385282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:45:18.876370 containerd[1990]: time="2026-01-13T23:45:18.876210235Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:45:18.878987 containerd[1990]: time="2026-01-13T23:45:18.878419183Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:45:18.878987 containerd[1990]: time="2026-01-13T23:45:18.878497375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:45:18.880437 kubelet[3562]: E0113 23:45:18.879397 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:45:18.880437 kubelet[3562]: E0113 23:45:18.879456 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:45:18.880437 kubelet[3562]: E0113 23:45:18.879670 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-77d2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bc5d5895-dvsmb_calico-apiserver(7796067b-5cab-42e9-af9d-320bb4208060): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:45:18.880935 kubelet[3562]: E0113 23:45:18.880838 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-dvsmb" podUID="7796067b-5cab-42e9-af9d-320bb4208060" Jan 13 23:45:22.604188 kubelet[3562]: E0113 23:45:22.602021 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zwrvg" podUID="86622233-a85f-41fd-b458-2112644e82b9" Jan 13 23:45:22.604866 containerd[1990]: time="2026-01-13T23:45:22.602186818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 13 23:45:22.882251 containerd[1990]: time="2026-01-13T23:45:22.882027863Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:45:22.884702 
containerd[1990]: time="2026-01-13T23:45:22.884302427Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 13 23:45:22.884702 containerd[1990]: time="2026-01-13T23:45:22.884372399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 13 23:45:22.884922 kubelet[3562]: E0113 23:45:22.884773 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:45:22.884922 kubelet[3562]: E0113 23:45:22.884867 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:45:22.885613 kubelet[3562]: E0113 23:45:22.885384 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ksj5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lblk8_calico-system(884e12a9-b4d3-4695-bc91-5cdf1a464d0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 13 23:45:22.887281 containerd[1990]: time="2026-01-13T23:45:22.885946535Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 13 23:45:23.145684 systemd[1]: Started sshd@25-172.31.22.81:22-20.161.92.111:40330.service - OpenSSH per-connection server daemon (20.161.92.111:40330). Jan 13 23:45:23.154569 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:45:23.154683 kernel: audit: type=1130 audit(1768347923.146:934): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.22.81:22-20.161.92.111:40330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:23.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.22.81:22-20.161.92.111:40330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:23.157466 containerd[1990]: time="2026-01-13T23:45:23.157387473Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:45:23.161110 containerd[1990]: time="2026-01-13T23:45:23.161015061Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 13 23:45:23.161335 containerd[1990]: time="2026-01-13T23:45:23.161063709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 13 23:45:23.161454 kubelet[3562]: E0113 23:45:23.161392 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:45:23.161579 kubelet[3562]: E0113 23:45:23.161465 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:45:23.161802 kubelet[3562]: E0113 23:45:23.161727 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d4af49205dc04bf3a26511b093d7fa31,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f4qrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78b56f8d66-w4wmx_calico-system(d5bf06bf-e923-4a15-849e-51c08230b88e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 13 23:45:23.163487 containerd[1990]: time="2026-01-13T23:45:23.163410477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 13 23:45:23.461613 containerd[1990]: time="2026-01-13T23:45:23.461402650Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:45:23.463763 containerd[1990]: time="2026-01-13T23:45:23.463543078Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 13 23:45:23.463763 containerd[1990]: time="2026-01-13T23:45:23.463680634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 13 23:45:23.463979 kubelet[3562]: E0113 23:45:23.463892 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:45:23.464055 kubelet[3562]: E0113 23:45:23.463972 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:45:23.464507 containerd[1990]: time="2026-01-13T23:45:23.464457862Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 13 23:45:23.465171 kubelet[3562]: E0113 23:45:23.465068 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ksj5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lblk8_calico-system(884e12a9-b4d3-4695-bc91-5cdf1a464d0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 13 23:45:23.466440 kubelet[3562]: E0113 23:45:23.466340 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lblk8" podUID="884e12a9-b4d3-4695-bc91-5cdf1a464d0b" Jan 13 23:45:23.665000 audit[6092]: USER_ACCT pid=6092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:23.672683 sshd[6092]: Accepted publickey for core from 
20.161.92.111 port 40330 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:45:23.674195 kernel: audit: type=1101 audit(1768347923.665:935): pid=6092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:23.673000 audit[6092]: CRED_ACQ pid=6092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:23.683953 sshd-session[6092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:45:23.688526 kernel: audit: type=1103 audit(1768347923.673:936): pid=6092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:23.688795 kernel: audit: type=1006 audit(1768347923.673:937): pid=6092 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 13 23:45:23.673000 audit[6092]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcc0ee8b0 a2=3 a3=0 items=0 ppid=1 pid=6092 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:23.701210 kernel: audit: type=1300 audit(1768347923.673:937): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcc0ee8b0 a2=3 a3=0 items=0 ppid=1 pid=6092 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:23.701356 kernel: audit: type=1327 audit(1768347923.673:937): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:45:23.673000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:45:23.711734 systemd-logind[1947]: New session 27 of user core. Jan 13 23:45:23.719791 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 13 23:45:23.730000 audit[6092]: USER_START pid=6092 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:23.740592 containerd[1990]: time="2026-01-13T23:45:23.740356392Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:45:23.747740 kernel: audit: type=1105 audit(1768347923.730:938): pid=6092 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:23.747889 kernel: audit: type=1103 audit(1768347923.740:939): pid=6096 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:23.740000 audit[6096]: CRED_ACQ pid=6096 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:23.748044 containerd[1990]: time="2026-01-13T23:45:23.747518064Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 13 23:45:23.748044 containerd[1990]: time="2026-01-13T23:45:23.747661428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 13 23:45:23.748866 kubelet[3562]: E0113 23:45:23.748411 3562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:45:23.748866 kubelet[3562]: E0113 23:45:23.748564 3562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:45:23.750230 kubelet[3562]: E0113 23:45:23.749854 3562 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f4qrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78b56f8d66-w4wmx_calico-system(d5bf06bf-e923-4a15-849e-51c08230b88e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 13 23:45:23.752014 kubelet[3562]: E0113 23:45:23.751181 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78b56f8d66-w4wmx" podUID="d5bf06bf-e923-4a15-849e-51c08230b88e" Jan 13 23:45:24.085600 sshd[6096]: Connection closed by 20.161.92.111 port 40330 Jan 13 23:45:24.086362 sshd-session[6092]: pam_unix(sshd:session): session closed for user core Jan 13 23:45:24.088000 audit[6092]: USER_END pid=6092 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:24.096000 audit[6092]: CRED_DISP pid=6092 uid=0 auid=500 ses=27 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:24.111068 kernel: audit: type=1106 audit(1768347924.088:940): pid=6092 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:24.111472 kernel: audit: type=1104 audit(1768347924.096:941): pid=6092 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:45:24.106523 systemd[1]: sshd@25-172.31.22.81:22-20.161.92.111:40330.service: Deactivated successfully. Jan 13 23:45:24.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.22.81:22-20.161.92.111:40330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:24.115789 systemd[1]: session-27.scope: Deactivated successfully. Jan 13 23:45:24.123330 systemd-logind[1947]: Session 27 logged out. Waiting for processes to exit. Jan 13 23:45:24.126892 systemd-logind[1947]: Removed session 27. Jan 13 23:45:26.616500 kubelet[3562]: E0113 23:45:26.616401 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b8d4f74f9-kqpzz" podUID="3317981a-15b4-41f8-a3cf-26fbd9c6fbf1" Jan 13 23:45:29.598669 kubelet[3562]: E0113 23:45:29.598579 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-dvsmb" podUID="7796067b-5cab-42e9-af9d-320bb4208060" Jan 13 23:45:29.601539 kubelet[3562]: E0113 23:45:29.601453 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c9dddc9f7-sd8kc" podUID="db70aac6-82d4-4ef8-98ae-1ad4091dd76e" Jan 13 23:45:31.597956 kubelet[3562]: E0113 23:45:31.597872 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-g9wvb" podUID="d529d459-4c8c-4f5e-b8a4-f53690574272" Jan 13 23:45:32.603284 systemd[1]: Started sshd@26-172.31.22.81:22-188.166.14.151:38814.service - OpenSSH per-connection server daemon (188.166.14.151:38814). Jan 13 23:45:32.612439 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:45:32.612834 kernel: audit: type=1130 audit(1768347932.602:943): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.22.81:22-188.166.14.151:38814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:32.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.22.81:22-188.166.14.151:38814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:32.829479 sshd[6109]: Connection closed by 188.166.14.151 port 38814 Jan 13 23:45:32.831754 systemd[1]: sshd@26-172.31.22.81:22-188.166.14.151:38814.service: Deactivated successfully. Jan 13 23:45:32.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.22.81:22-188.166.14.151:38814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:32.839190 kernel: audit: type=1131 audit(1768347932.831:944): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.22.81:22-188.166.14.151:38814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:45:34.598911 kubelet[3562]: E0113 23:45:34.598794 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zwrvg" podUID="86622233-a85f-41fd-b458-2112644e82b9" Jan 13 23:45:35.599809 kubelet[3562]: E0113 23:45:35.599679 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lblk8" podUID="884e12a9-b4d3-4695-bc91-5cdf1a464d0b" Jan 13 23:45:35.600807 kubelet[3562]: E0113 23:45:35.600334 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78b56f8d66-w4wmx" podUID="d5bf06bf-e923-4a15-849e-51c08230b88e" Jan 13 23:45:38.395434 systemd[1]: cri-containerd-eca446fae923162551c4163c148dd68a37a60a1fab5a298db9905446e9dff934.scope: Deactivated successfully. Jan 13 23:45:38.396778 systemd[1]: cri-containerd-eca446fae923162551c4163c148dd68a37a60a1fab5a298db9905446e9dff934.scope: Consumed 6.684s CPU time, 57.5M memory peak. 
Jan 13 23:45:38.398000 audit: BPF prog-id=115 op=UNLOAD Jan 13 23:45:38.407248 kernel: audit: type=1334 audit(1768347938.398:945): prog-id=115 op=UNLOAD Jan 13 23:45:38.407498 kernel: audit: type=1334 audit(1768347938.398:946): prog-id=119 op=UNLOAD Jan 13 23:45:38.398000 audit: BPF prog-id=119 op=UNLOAD Jan 13 23:45:38.400000 audit: BPF prog-id=268 op=LOAD Jan 13 23:45:38.409388 containerd[1990]: time="2026-01-13T23:45:38.408601764Z" level=info msg="received container exit event container_id:\"eca446fae923162551c4163c148dd68a37a60a1fab5a298db9905446e9dff934\" id:\"eca446fae923162551c4163c148dd68a37a60a1fab5a298db9905446e9dff934\" pid:3139 exit_status:1 exited_at:{seconds:1768347938 nanos:407890392}" Jan 13 23:45:38.410354 kernel: audit: type=1334 audit(1768347938.400:947): prog-id=268 op=LOAD Jan 13 23:45:38.411175 kernel: audit: type=1334 audit(1768347938.400:948): prog-id=100 op=UNLOAD Jan 13 23:45:38.400000 audit: BPF prog-id=100 op=UNLOAD Jan 13 23:45:38.460542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eca446fae923162551c4163c148dd68a37a60a1fab5a298db9905446e9dff934-rootfs.mount: Deactivated successfully. Jan 13 23:45:38.489235 kubelet[3562]: I0113 23:45:38.489190 3562 scope.go:117] "RemoveContainer" containerID="eca446fae923162551c4163c148dd68a37a60a1fab5a298db9905446e9dff934" Jan 13 23:45:38.494775 containerd[1990]: time="2026-01-13T23:45:38.494691193Z" level=info msg="CreateContainer within sandbox \"4a534ff9cb1fe5e121e4965c693ee674868d5352c0cd56707177c822b55aa93e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 13 23:45:38.518161 containerd[1990]: time="2026-01-13T23:45:38.515448841Z" level=info msg="Container 8f80523ae8cccd7d67ca870b6dc44312284f44b3955af270397bbb2695cab054: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:45:38.526957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1181497819.mount: Deactivated successfully. Jan 13 23:45:38.540185 containerd[1990]: time="2026-01-13T23:45:38.540067933Z" level=info msg="CreateContainer within sandbox \"4a534ff9cb1fe5e121e4965c693ee674868d5352c0cd56707177c822b55aa93e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8f80523ae8cccd7d67ca870b6dc44312284f44b3955af270397bbb2695cab054\"" Jan 13 23:45:38.541632 containerd[1990]: time="2026-01-13T23:45:38.541578313Z" level=info msg="StartContainer for \"8f80523ae8cccd7d67ca870b6dc44312284f44b3955af270397bbb2695cab054\"" Jan 13 23:45:38.544323 containerd[1990]: time="2026-01-13T23:45:38.544263121Z" level=info msg="connecting to shim 8f80523ae8cccd7d67ca870b6dc44312284f44b3955af270397bbb2695cab054" address="unix:///run/containerd/s/a3fdfed39eb1a27873e657998e2eeea3e5ac1ff8e0e2eee5cd17ddf0cf13b294" protocol=ttrpc version=3 Jan 13 23:45:38.592510 systemd[1]: Started cri-containerd-8f80523ae8cccd7d67ca870b6dc44312284f44b3955af270397bbb2695cab054.scope - libcontainer container 8f80523ae8cccd7d67ca870b6dc44312284f44b3955af270397bbb2695cab054. 
Jan 13 23:45:38.624000 audit: BPF prog-id=269 op=LOAD Jan 13 23:45:38.626000 audit: BPF prog-id=270 op=LOAD Jan 13 23:45:38.629768 kernel: audit: type=1334 audit(1768347938.624:949): prog-id=269 op=LOAD Jan 13 23:45:38.629846 kernel: audit: type=1334 audit(1768347938.626:950): prog-id=270 op=LOAD Jan 13 23:45:38.626000 audit[6125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=2993 pid=6125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:38.637048 kernel: audit: type=1300 audit(1768347938.626:950): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=2993 pid=6125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:38.643481 kernel: audit: type=1327 audit(1768347938.626:950): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866383035323361653863636364376436376361383730623664633434 Jan 13 23:45:38.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866383035323361653863636364376436376361383730623664633434 Jan 13 23:45:38.626000 audit: BPF prog-id=270 op=UNLOAD Jan 13 23:45:38.626000 audit[6125]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2993 pid=6125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:38.646220 kernel: audit: type=1334 audit(1768347938.626:951): prog-id=270 op=UNLOAD Jan 13 23:45:38.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866383035323361653863636364376436376361383730623664633434 Jan 13 23:45:38.654149 kernel: audit: type=1300 audit(1768347938.626:951): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2993 pid=6125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:38.627000 audit: BPF prog-id=271 op=LOAD Jan 13 23:45:38.627000 audit[6125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=2993 pid=6125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:38.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866383035323361653863636364376436376361383730623664633434 Jan 13 23:45:38.628000 audit: BPF prog-id=272 op=LOAD Jan 13 23:45:38.628000 audit[6125]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=2993 pid=6125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:38.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866383035323361653863636364376436376361383730623664633434 Jan 13 23:45:38.628000 audit: BPF prog-id=272 op=UNLOAD Jan 13 23:45:38.628000 audit[6125]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2993 pid=6125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:38.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866383035323361653863636364376436376361383730623664633434 Jan 13 23:45:38.628000 audit: BPF prog-id=271 op=UNLOAD Jan 13 23:45:38.628000 audit[6125]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2993 pid=6125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:38.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866383035323361653863636364376436376361383730623664633434 Jan 13 23:45:38.628000 audit: BPF prog-id=273 op=LOAD Jan 13 23:45:38.628000 audit[6125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=2993 pid=6125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:38.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866383035323361653863636364376436376361383730623664633434 Jan 13 23:45:38.702811 containerd[1990]: time="2026-01-13T23:45:38.702680990Z" level=info msg="StartContainer for \"8f80523ae8cccd7d67ca870b6dc44312284f44b3955af270397bbb2695cab054\" returns successfully" Jan 13 23:45:39.059922 systemd[1]: cri-containerd-3850581de1f61303d3e6b0cce6dca349c84a1bcc90a8fb5c33eb5540c7cf61ae.scope: Deactivated successfully. Jan 13 23:45:39.060566 systemd[1]: cri-containerd-3850581de1f61303d3e6b0cce6dca349c84a1bcc90a8fb5c33eb5540c7cf61ae.scope: Consumed 29.802s CPU time, 104.3M memory peak. 
Jan 13 23:45:39.063000 audit: BPF prog-id=153 op=UNLOAD Jan 13 23:45:39.063000 audit: BPF prog-id=157 op=UNLOAD Jan 13 23:45:39.068894 containerd[1990]: time="2026-01-13T23:45:39.068819604Z" level=info msg="received container exit event container_id:\"3850581de1f61303d3e6b0cce6dca349c84a1bcc90a8fb5c33eb5540c7cf61ae\" id:\"3850581de1f61303d3e6b0cce6dca349c84a1bcc90a8fb5c33eb5540c7cf61ae\" pid:3879 exit_status:1 exited_at:{seconds:1768347939 nanos:67810572}" Jan 13 23:45:39.123253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3850581de1f61303d3e6b0cce6dca349c84a1bcc90a8fb5c33eb5540c7cf61ae-rootfs.mount: Deactivated successfully. Jan 13 23:45:39.497202 kubelet[3562]: I0113 23:45:39.497151 3562 scope.go:117] "RemoveContainer" containerID="3850581de1f61303d3e6b0cce6dca349c84a1bcc90a8fb5c33eb5540c7cf61ae" Jan 13 23:45:39.502175 containerd[1990]: time="2026-01-13T23:45:39.502093322Z" level=info msg="CreateContainer within sandbox \"d42c7ea173d67ae600f6ec0b3bb0f897a85e7c5ea5da9496a11ff3db1c890bb4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 13 23:45:39.526523 containerd[1990]: time="2026-01-13T23:45:39.526423766Z" level=info msg="Container 81146bddbfa3a878b52a8aa6e822b98e27bb8e7a46d82ba0d41f6d3fcf8d8e3a: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:45:39.550369 containerd[1990]: time="2026-01-13T23:45:39.550269122Z" level=info msg="CreateContainer within sandbox \"d42c7ea173d67ae600f6ec0b3bb0f897a85e7c5ea5da9496a11ff3db1c890bb4\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"81146bddbfa3a878b52a8aa6e822b98e27bb8e7a46d82ba0d41f6d3fcf8d8e3a\"" Jan 13 23:45:39.551284 containerd[1990]: time="2026-01-13T23:45:39.551205494Z" level=info msg="StartContainer for \"81146bddbfa3a878b52a8aa6e822b98e27bb8e7a46d82ba0d41f6d3fcf8d8e3a\"" Jan 13 23:45:39.554344 containerd[1990]: time="2026-01-13T23:45:39.554210342Z" level=info msg="connecting to shim 81146bddbfa3a878b52a8aa6e822b98e27bb8e7a46d82ba0d41f6d3fcf8d8e3a" address="unix:///run/containerd/s/37cfa305a5ff2b2e2d899bce2f6bbdf9ca155eff281f2112a145d70c6b0945cf" protocol=ttrpc version=3 Jan 13 23:45:39.623530 systemd[1]: Started cri-containerd-81146bddbfa3a878b52a8aa6e822b98e27bb8e7a46d82ba0d41f6d3fcf8d8e3a.scope - libcontainer container 81146bddbfa3a878b52a8aa6e822b98e27bb8e7a46d82ba0d41f6d3fcf8d8e3a. 
Jan 13 23:45:39.653000 audit: BPF prog-id=274 op=LOAD Jan 13 23:45:39.655000 audit: BPF prog-id=275 op=LOAD Jan 13 23:45:39.655000 audit[6166]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=3681 pid=6166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:39.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831313436626464626661336138373862353261386161366538323262 Jan 13 23:45:39.655000 audit: BPF prog-id=275 op=UNLOAD Jan 13 23:45:39.655000 audit[6166]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3681 pid=6166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:39.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831313436626464626661336138373862353261386161366538323262 Jan 13 23:45:39.656000 audit: BPF prog-id=276 op=LOAD Jan 13 23:45:39.656000 audit[6166]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=3681 pid=6166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:39.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831313436626464626661336138373862353261386161366538323262 Jan 13 23:45:39.657000 audit: BPF prog-id=277 op=LOAD Jan 13 23:45:39.657000 audit[6166]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=3681 pid=6166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:39.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831313436626464626661336138373862353261386161366538323262 Jan 13 23:45:39.657000 audit: BPF prog-id=277 op=UNLOAD Jan 13 23:45:39.657000 audit[6166]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3681 pid=6166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:39.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831313436626464626661336138373862353261386161366538323262 Jan 13 23:45:39.657000 audit: BPF prog-id=276 op=UNLOAD Jan 13 23:45:39.657000 audit[6166]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3681 pid=6166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:39.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831313436626464626661336138373862353261386161366538323262 Jan 13 23:45:39.657000 audit: BPF prog-id=278 op=LOAD Jan 13 23:45:39.657000 audit[6166]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=3681 pid=6166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:39.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831313436626464626661336138373862353261386161366538323262 Jan 13 23:45:39.717639 containerd[1990]: time="2026-01-13T23:45:39.717571467Z" level=info msg="StartContainer for \"81146bddbfa3a878b52a8aa6e822b98e27bb8e7a46d82ba0d41f6d3fcf8d8e3a\" returns successfully" Jan 13 23:45:40.600356 kubelet[3562]: E0113 23:45:40.600258 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c9dddc9f7-sd8kc" podUID="db70aac6-82d4-4ef8-98ae-1ad4091dd76e" Jan 13 23:45:41.597465 kubelet[3562]: E0113 23:45:41.597389 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b8d4f74f9-kqpzz" podUID="3317981a-15b4-41f8-a3cf-26fbd9c6fbf1" Jan 13 23:45:41.840795 kubelet[3562]: E0113 23:45:41.840318 3562 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-81?timeout=10s\": context deadline exceeded" Jan 13 23:45:43.562883 systemd[1]: cri-containerd-3057bc6709d68a18b066fc2e3858f2a7488a6747542a8e0c5b33ee96d97c1913.scope: Deactivated successfully. Jan 13 23:45:43.565344 systemd[1]: cri-containerd-3057bc6709d68a18b066fc2e3858f2a7488a6747542a8e0c5b33ee96d97c1913.scope: Consumed 4.662s CPU time, 20.3M memory peak. 
Jan 13 23:45:43.569810 kernel: kauditd_printk_skb: 40 callbacks suppressed
Jan 13 23:45:43.569977 kernel: audit: type=1334 audit(1768347943.565:967): prog-id=110 op=UNLOAD
Jan 13 23:45:43.565000 audit: BPF prog-id=110 op=UNLOAD
Jan 13 23:45:43.565000 audit: BPF prog-id=114 op=UNLOAD
Jan 13 23:45:43.573246 kernel: audit: type=1334 audit(1768347943.565:968): prog-id=114 op=UNLOAD
Jan 13 23:45:43.573549 kernel: audit: type=1334 audit(1768347943.569:969): prog-id=279 op=LOAD
Jan 13 23:45:43.569000 audit: BPF prog-id=279 op=LOAD
Jan 13 23:45:43.569000 audit: BPF prog-id=95 op=UNLOAD
Jan 13 23:45:43.576661 kernel: audit: type=1334 audit(1768347943.569:970): prog-id=95 op=UNLOAD
Jan 13 23:45:43.577264 containerd[1990]: time="2026-01-13T23:45:43.577119858Z" level=info msg="received container exit event container_id:\"3057bc6709d68a18b066fc2e3858f2a7488a6747542a8e0c5b33ee96d97c1913\" id:\"3057bc6709d68a18b066fc2e3858f2a7488a6747542a8e0c5b33ee96d97c1913\" pid:3127 exit_status:1 exited_at:{seconds:1768347943 nanos:576052542}"
Jan 13 23:45:43.598114 kubelet[3562]: E0113 23:45:43.597834 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-dvsmb" podUID="7796067b-5cab-42e9-af9d-320bb4208060"
Jan 13 23:45:43.638796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3057bc6709d68a18b066fc2e3858f2a7488a6747542a8e0c5b33ee96d97c1913-rootfs.mount: Deactivated successfully.
Jan 13 23:45:44.538904 kubelet[3562]: I0113 23:45:44.538850 3562 scope.go:117] "RemoveContainer" containerID="3057bc6709d68a18b066fc2e3858f2a7488a6747542a8e0c5b33ee96d97c1913"
Jan 13 23:45:44.543162 containerd[1990]: time="2026-01-13T23:45:44.543075415Z" level=info msg="CreateContainer within sandbox \"df94e9e5d1a5318d45488a016147ed4f7f9aa83845193a34354fa2e7d0b8943c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 13 23:45:44.564163 containerd[1990]: time="2026-01-13T23:45:44.561521659Z" level=info msg="Container 6b02c8666e13dd35d75ed01d33cc9c4b1543b53cc00fbe8c15cf62117d8916b9: CDI devices from CRI Config.CDIDevices: []"
Jan 13 23:45:44.583898 containerd[1990]: time="2026-01-13T23:45:44.583837567Z" level=info msg="CreateContainer within sandbox \"df94e9e5d1a5318d45488a016147ed4f7f9aa83845193a34354fa2e7d0b8943c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"6b02c8666e13dd35d75ed01d33cc9c4b1543b53cc00fbe8c15cf62117d8916b9\""
Jan 13 23:45:44.585248 containerd[1990]: time="2026-01-13T23:45:44.585194455Z" level=info msg="StartContainer for \"6b02c8666e13dd35d75ed01d33cc9c4b1543b53cc00fbe8c15cf62117d8916b9\""
Jan 13 23:45:44.587450 containerd[1990]: time="2026-01-13T23:45:44.587371675Z" level=info msg="connecting to shim 6b02c8666e13dd35d75ed01d33cc9c4b1543b53cc00fbe8c15cf62117d8916b9" address="unix:///run/containerd/s/a30b9eae1753983dbd6670409d3ffef0ea865265fe0d6793784eb73617bcfb13" protocol=ttrpc version=3
Jan 13 23:45:44.631469 systemd[1]: Started cri-containerd-6b02c8666e13dd35d75ed01d33cc9c4b1543b53cc00fbe8c15cf62117d8916b9.scope - libcontainer container 6b02c8666e13dd35d75ed01d33cc9c4b1543b53cc00fbe8c15cf62117d8916b9.
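The kernel-emitted lines above ("audit: type=1334 audit(1768347943.565:967): ...") carry the audit event time as a Unix epoch with millisecond precision plus a per-boot serial number, while journald adds its own wall-clock prefix; the same second, 1768347943, also appears in containerd's exited_at field. A small sketch for converting that stamp back into a readable timestamp follows; the helper name parse_audit_stamp is my own, not an auditd tool.

# Minimal sketch: convert an "audit(1768347943.565:967)" stamp into wall-clock time + serial.
from datetime import datetime, timezone
import re

def parse_audit_stamp(stamp: str) -> tuple[datetime, int]:
    """'audit(1768347943.565:967)' -> (UTC datetime, serial number)."""
    m = re.search(r"audit\((\d+\.\d+):(\d+)\)", stamp)
    if not m:
        raise ValueError(f"not an audit stamp: {stamp!r}")
    epoch, serial = float(m.group(1)), int(m.group(2))
    return datetime.fromtimestamp(epoch, tz=timezone.utc), serial

if __name__ == "__main__":
    when, serial = parse_audit_stamp("audit(1768347943.565:967)")
    print(when.isoformat(), serial)
    # -> 2026-01-13T23:45:43.565000+00:00 967, matching the Jan 13 23:45:43 journal lines above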
Jan 13 23:45:44.658000 audit: BPF prog-id=280 op=LOAD
Jan 13 23:45:44.658000 audit: BPF prog-id=281 op=LOAD
Jan 13 23:45:44.663247 kernel: audit: type=1334 audit(1768347944.658:971): prog-id=280 op=LOAD
Jan 13 23:45:44.669589 kernel: audit: type=1334 audit(1768347944.658:972): prog-id=281 op=LOAD
Jan 13 23:45:44.669675 kernel: audit: type=1300 audit(1768347944.658:972): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3000 pid=6233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:45:44.658000 audit[6233]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3000 pid=6233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:45:44.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662303263383636366531336464333564373565643031643333636339
Jan 13 23:45:44.676004 kernel: audit: type=1327 audit(1768347944.658:972): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662303263383636366531336464333564373565643031643333636339
Jan 13 23:45:44.661000 audit: BPF prog-id=281 op=UNLOAD
Jan 13 23:45:44.678016 kernel: audit: type=1334 audit(1768347944.661:973): prog-id=281 op=UNLOAD
Jan 13 23:45:44.678256 kernel: audit: type=1300 audit(1768347944.661:973): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3000 pid=6233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:45:44.661000 audit[6233]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3000 pid=6233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:45:44.661000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662303263383636366531336464333564373565643031643333636339
Jan 13 23:45:44.661000 audit: BPF prog-id=282 op=LOAD
Jan 13 23:45:44.661000 audit[6233]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3000 pid=6233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:45:44.661000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662303263383636366531336464333564373565643031643333636339
Jan 13 23:45:44.662000 audit: BPF prog-id=283 op=LOAD
Jan 13 23:45:44.662000 audit[6233]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3000 pid=6233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:45:44.662000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662303263383636366531336464333564373565643031643333636339
Jan 13 23:45:44.675000 audit: BPF prog-id=283 op=UNLOAD
Jan 13 23:45:44.675000 audit[6233]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3000 pid=6233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:45:44.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662303263383636366531336464333564373565643031643333636339
Jan 13 23:45:44.675000 audit: BPF prog-id=282 op=UNLOAD
Jan 13 23:45:44.675000 audit[6233]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3000 pid=6233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:45:44.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662303263383636366531336464333564373565643031643333636339
Jan 13 23:45:44.675000 audit: BPF prog-id=284 op=LOAD
Jan 13 23:45:44.675000 audit[6233]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3000 pid=6233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:45:44.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662303263383636366531336464333564373565643031643333636339
Jan 13 23:45:44.754200 containerd[1990]: time="2026-01-13T23:45:44.754100528Z" level=info msg="StartContainer for \"6b02c8666e13dd35d75ed01d33cc9c4b1543b53cc00fbe8c15cf62117d8916b9\" returns successfully"
Jan 13 23:45:46.599667 kubelet[3562]: E0113 23:45:46.599603 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-g9wvb" podUID="d529d459-4c8c-4f5e-b8a4-f53690574272"
Jan 13 23:45:48.598687 kubelet[3562]: E0113 23:45:48.598583 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zwrvg" podUID="86622233-a85f-41fd-b458-2112644e82b9"
Jan 13 23:45:49.599579 kubelet[3562]: E0113 23:45:49.599502 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78b56f8d66-w4wmx" podUID="d5bf06bf-e923-4a15-849e-51c08230b88e"
Jan 13 23:45:50.598702 kubelet[3562]: E0113 23:45:50.598575 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lblk8" podUID="884e12a9-b4d3-4695-bc91-5cdf1a464d0b"
Jan 13 23:45:51.189044 systemd[1]: cri-containerd-81146bddbfa3a878b52a8aa6e822b98e27bb8e7a46d82ba0d41f6d3fcf8d8e3a.scope: Deactivated successfully.
Jan 13 23:45:51.191000 audit: BPF prog-id=274 op=UNLOAD
Jan 13 23:45:51.193687 containerd[1990]: time="2026-01-13T23:45:51.193336704Z" level=info msg="received container exit event container_id:\"81146bddbfa3a878b52a8aa6e822b98e27bb8e7a46d82ba0d41f6d3fcf8d8e3a\" id:\"81146bddbfa3a878b52a8aa6e822b98e27bb8e7a46d82ba0d41f6d3fcf8d8e3a\" pid:6179 exit_status:1 exited_at:{seconds:1768347951 nanos:191806728}"
Jan 13 23:45:51.194808 kernel: kauditd_printk_skb: 16 callbacks suppressed
Jan 13 23:45:51.194940 kernel: audit: type=1334 audit(1768347951.191:979): prog-id=274 op=UNLOAD
Jan 13 23:45:51.191000 audit: BPF prog-id=278 op=UNLOAD
Jan 13 23:45:51.198268 kernel: audit: type=1334 audit(1768347951.191:980): prog-id=278 op=UNLOAD
Jan 13 23:45:51.241011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81146bddbfa3a878b52a8aa6e822b98e27bb8e7a46d82ba0d41f6d3fcf8d8e3a-rootfs.mount: Deactivated successfully.
Jan 13 23:45:51.571006 kubelet[3562]: I0113 23:45:51.570422 3562 scope.go:117] "RemoveContainer" containerID="3850581de1f61303d3e6b0cce6dca349c84a1bcc90a8fb5c33eb5540c7cf61ae"
Jan 13 23:45:51.571764 kubelet[3562]: I0113 23:45:51.571533 3562 scope.go:117] "RemoveContainer" containerID="81146bddbfa3a878b52a8aa6e822b98e27bb8e7a46d82ba0d41f6d3fcf8d8e3a"
Jan 13 23:45:51.571843 kubelet[3562]: E0113 23:45:51.571789 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-cw8vb_tigera-operator(763e250c-d2d7-4565-84db-0fa178ef7c13)\"" pod="tigera-operator/tigera-operator-7dcd859c48-cw8vb" podUID="763e250c-d2d7-4565-84db-0fa178ef7c13"
Jan 13 23:45:51.576163 containerd[1990]: time="2026-01-13T23:45:51.576020222Z" level=info msg="RemoveContainer for \"3850581de1f61303d3e6b0cce6dca349c84a1bcc90a8fb5c33eb5540c7cf61ae\""
Jan 13 23:45:51.587722 containerd[1990]: time="2026-01-13T23:45:51.587641814Z" level=info msg="RemoveContainer for \"3850581de1f61303d3e6b0cce6dca349c84a1bcc90a8fb5c33eb5540c7cf61ae\" returns successfully"
Jan 13 23:45:51.842812 kubelet[3562]: E0113 23:45:51.841507 3562 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-81?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 13 23:45:52.598861 kubelet[3562]: E0113 23:45:52.598754 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c9dddc9f7-sd8kc" podUID="db70aac6-82d4-4ef8-98ae-1ad4091dd76e"
Jan 13 23:45:52.601879 kubelet[3562]: E0113 23:45:52.601828 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b8d4f74f9-kqpzz" podUID="3317981a-15b4-41f8-a3cf-26fbd9c6fbf1"
Jan 13 23:45:57.597848 kubelet[3562]: E0113 23:45:57.597765 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-dvsmb" podUID="7796067b-5cab-42e9-af9d-320bb4208060"
Jan 13 23:45:58.598540 kubelet[3562]: E0113 23:45:58.598428 3562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bc5d5895-g9wvb" podUID="d529d459-4c8c-4f5e-b8a4-f53690574272"