Jan 13 23:45:17.990216 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jan 13 23:45:17.990271 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Jan 13 22:00:26 -00 2026 Jan 13 23:45:17.990298 kernel: KASLR disabled due to lack of seed Jan 13 23:45:17.990315 kernel: efi: EFI v2.7 by EDK II Jan 13 23:45:17.990331 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78557598 Jan 13 23:45:17.990347 kernel: secureboot: Secure boot disabled Jan 13 23:45:17.990366 kernel: ACPI: Early table checksum verification disabled Jan 13 23:45:17.990382 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jan 13 23:45:17.990398 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jan 13 23:45:17.990418 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 13 23:45:17.990435 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 13 23:45:17.990451 kernel: ACPI: FACS 0x0000000078630000 000040 Jan 13 23:45:17.990467 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 13 23:45:17.990484 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jan 13 23:45:17.990507 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jan 13 23:45:17.990524 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jan 13 23:45:17.990542 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 13 23:45:17.990559 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jan 13 23:45:17.990576 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jan 13 23:45:17.990593 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jan 13 23:45:17.990611 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jan 13 23:45:17.990628 kernel: printk: legacy bootconsole [uart0] enabled Jan 13 23:45:17.990644 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 13 23:45:17.990662 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jan 13 23:45:17.990683 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff] Jan 13 23:45:17.990700 kernel: Zone ranges: Jan 13 23:45:17.990717 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 13 23:45:17.990733 kernel: DMA32 empty Jan 13 23:45:17.990750 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jan 13 23:45:17.990767 kernel: Device empty Jan 13 23:45:17.990783 kernel: Movable zone start for each node Jan 13 23:45:17.990800 kernel: Early memory node ranges Jan 13 23:45:17.990817 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jan 13 23:45:17.990833 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jan 13 23:45:17.990850 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jan 13 23:45:17.990867 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jan 13 23:45:17.990888 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jan 13 23:45:17.990905 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jan 13 23:45:17.990922 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jan 13 23:45:17.990939 
kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jan 13 23:45:17.990963 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jan 13 23:45:17.990985 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jan 13 23:45:17.991003 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Jan 13 23:45:17.991020 kernel: psci: probing for conduit method from ACPI. Jan 13 23:45:17.991038 kernel: psci: PSCIv1.0 detected in firmware. Jan 13 23:45:17.991055 kernel: psci: Using standard PSCI v0.2 function IDs Jan 13 23:45:17.991109 kernel: psci: Trusted OS migration not required Jan 13 23:45:17.991128 kernel: psci: SMC Calling Convention v1.1 Jan 13 23:45:17.991146 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Jan 13 23:45:17.991164 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 13 23:45:17.991187 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 13 23:45:17.991206 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 13 23:45:17.991223 kernel: Detected PIPT I-cache on CPU0 Jan 13 23:45:17.991241 kernel: CPU features: detected: GIC system register CPU interface Jan 13 23:45:17.991258 kernel: CPU features: detected: Spectre-v2 Jan 13 23:45:17.991276 kernel: CPU features: detected: Spectre-v3a Jan 13 23:45:17.991293 kernel: CPU features: detected: Spectre-BHB Jan 13 23:45:17.991311 kernel: CPU features: detected: ARM erratum 1742098 Jan 13 23:45:17.991328 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jan 13 23:45:17.991346 kernel: alternatives: applying boot alternatives Jan 13 23:45:17.991365 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=3d3f73de8d2693594dfefd279d2c8d77c282a05a4cbc54177503d31784261f6b Jan 13 23:45:17.991389 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 23:45:17.991407 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 23:45:17.991424 kernel: Fallback order for Node 0: 0 Jan 13 23:45:17.991442 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Jan 13 23:45:17.991459 kernel: Policy zone: Normal Jan 13 23:45:17.991497 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 23:45:17.991518 kernel: software IO TLB: area num 2. Jan 13 23:45:17.991536 kernel: software IO TLB: mapped [mem 0x000000006f800000-0x0000000073800000] (64MB) Jan 13 23:45:17.991554 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 13 23:45:17.991572 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 23:45:17.991596 kernel: rcu: RCU event tracing is enabled. Jan 13 23:45:17.991615 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 13 23:45:17.991633 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 23:45:17.991651 kernel: Tracing variant of Tasks RCU enabled. Jan 13 23:45:17.991669 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 13 23:45:17.991686 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 13 23:45:17.991704 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
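The kernel command line captured above mixes bare flags (such as earlycon) with key=value options, some of which nest a second equals sign (verity.usr=PARTUUID=...). A minimal Python sketch of how such a line can be split apart; it assumes no quoted values and keeps only the last occurrence of a repeated key such as console, whereas the kernel itself honors every occurrence.

```python
# Minimal sketch: split a kernel command line like the one logged above
# into bare flags and key=value options. Assumes no quoted values and
# keeps only the last occurrence of a repeated key (e.g. console).
def parse_cmdline(cmdline: str):
    flags, options = [], {}
    for token in cmdline.split():
        if "=" in token:
            key, value = token.split("=", 1)   # split on the first '=' only
            options[key] = value
        else:
            flags.append(token)
    return flags, options

line = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
        "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
        "console=tty1 console=ttyS0,115200n8 earlycon acpi=force "
        "flatcar.oem.id=ec2 net.ifnames=0 nvme_core.io_timeout=4294967295")

flags, options = parse_cmdline(line)
print(flags)                    # ['earlycon']
print(options["root"])          # LABEL=ROOT
print(options["verity.usr"])    # PARTUUID=7130c94a-...
```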
Jan 13 23:45:17.991722 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 23:45:17.991740 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 13 23:45:17.991757 kernel: GICv3: 96 SPIs implemented Jan 13 23:45:17.991774 kernel: GICv3: 0 Extended SPIs implemented Jan 13 23:45:17.991797 kernel: Root IRQ handler: gic_handle_irq Jan 13 23:45:17.991814 kernel: GICv3: GICv3 features: 16 PPIs Jan 13 23:45:17.991832 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jan 13 23:45:17.991849 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jan 13 23:45:17.991867 kernel: ITS [mem 0x10080000-0x1009ffff] Jan 13 23:45:17.991885 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Jan 13 23:45:17.991903 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Jan 13 23:45:17.991921 kernel: GICv3: using LPI property table @0x0000000400110000 Jan 13 23:45:17.991939 kernel: ITS: Using hypervisor restricted LPI range [128] Jan 13 23:45:17.991956 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Jan 13 23:45:17.991974 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 23:45:17.991999 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jan 13 23:45:17.992018 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jan 13 23:45:17.992038 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jan 13 23:45:17.992056 kernel: Console: colour dummy device 80x25 Jan 13 23:45:17.992096 kernel: printk: legacy console [tty1] enabled Jan 13 23:45:17.992116 kernel: ACPI: Core revision 20240827 Jan 13 23:45:17.992135 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jan 13 23:45:17.992154 kernel: pid_max: default: 32768 minimum: 301 Jan 13 23:45:17.992179 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 13 23:45:17.992197 kernel: landlock: Up and running. Jan 13 23:45:17.992216 kernel: SELinux: Initializing. Jan 13 23:45:17.992234 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 23:45:17.992252 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 23:45:17.992271 kernel: rcu: Hierarchical SRCU implementation. Jan 13 23:45:17.992290 kernel: rcu: Max phase no-delay instances is 400. Jan 13 23:45:17.992308 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 13 23:45:17.992331 kernel: Remapping and enabling EFI services. Jan 13 23:45:17.992350 kernel: smp: Bringing up secondary CPUs ... Jan 13 23:45:17.992368 kernel: Detected PIPT I-cache on CPU1 Jan 13 23:45:17.992386 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jan 13 23:45:17.992405 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Jan 13 23:45:17.992423 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jan 13 23:45:17.992441 kernel: smp: Brought up 1 node, 2 CPUs Jan 13 23:45:17.992464 kernel: SMP: Total of 2 processors activated. 
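The calibration line above reports 166.66 BogoMIPS with lpj=83333, derived from the 83.33 MHz architected timer rather than measured. The numbers are consistent under a CONFIG_HZ=1000 tick rate, which is an assumption here since the log does not state HZ.

```python
# Check of "166.66 BogoMIPS (lpj=83333)" against the 83.33 MHz
# arch_timer, assuming CONFIG_HZ=1000 (not stated in the log).
timer_hz = 83_333_333          # arch_timer frequency from the log
HZ = 1000                      # assumed kernel tick rate
lpj = timer_hz // HZ           # loops-per-jiffy when calibration is skipped
bogomips = lpj * HZ / 500_000
print(lpj)                     # 83333, as logged
print(round(bogomips, 2))      # 166.67, matching the logged 166.66 up to rounding
```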
Jan 13 23:45:17.992482 kernel: CPU: All CPU(s) started at EL1 Jan 13 23:45:17.992511 kernel: CPU features: detected: 32-bit EL0 Support Jan 13 23:45:17.992535 kernel: CPU features: detected: 32-bit EL1 Support Jan 13 23:45:17.992554 kernel: CPU features: detected: CRC32 instructions Jan 13 23:45:17.992573 kernel: alternatives: applying system-wide alternatives Jan 13 23:45:17.992593 kernel: Memory: 3823400K/4030464K available (11200K kernel code, 2458K rwdata, 9088K rodata, 12480K init, 1038K bss, 185716K reserved, 16384K cma-reserved) Jan 13 23:45:17.992613 kernel: devtmpfs: initialized Jan 13 23:45:17.992637 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 23:45:17.992656 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 13 23:45:17.992676 kernel: 23648 pages in range for non-PLT usage Jan 13 23:45:17.992695 kernel: 515168 pages in range for PLT usage Jan 13 23:45:17.992714 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 23:45:17.992737 kernel: SMBIOS 3.0.0 present. Jan 13 23:45:17.992755 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jan 13 23:45:17.992774 kernel: DMI: Memory slots populated: 0/0 Jan 13 23:45:17.992793 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 23:45:17.992813 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 13 23:45:17.992832 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 13 23:45:17.992852 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 13 23:45:17.992876 kernel: audit: initializing netlink subsys (disabled) Jan 13 23:45:17.992895 kernel: audit: type=2000 audit(0.224:1): state=initialized audit_enabled=0 res=1 Jan 13 23:45:17.992914 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 23:45:17.992933 kernel: cpuidle: using governor menu Jan 13 23:45:17.992952 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
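The Memory: line above gives its totals in KiB. Converted to GiB they line up with the nominal 4 GiB of an a1.large (the instance size comes from the DMI string; the 4 GiB figure itself is not in the log), and the gap between total and available roughly matches the reserved and cma-reserved figures on the same line.

```python
# Convert the "Memory: 3823400K/4030464K available (...)" figures to GiB.
# The comparison against a nominal 4 GiB a1.large is an assumption; the
# log itself only reports the KiB values.
total_kib     = 4_030_464
available_kib = 3_823_400
reserved_kib  = 185_716 + 16_384   # reserved + cma-reserved from the same line

gib = lambda kib: kib / 2**20
print(f"total      {gib(total_kib):.2f} GiB")                   # ~3.84
print(f"available  {gib(available_kib):.2f} GiB")               # ~3.65
print(f"held back  {gib(total_kib - available_kib):.2f} GiB")   # ~0.20
print(f"reserved   {gib(reserved_kib):.2f} GiB")                # ~0.19
```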
Jan 13 23:45:17.992971 kernel: ASID allocator initialised with 65536 entries Jan 13 23:45:17.992990 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 23:45:17.993013 kernel: Serial: AMBA PL011 UART driver Jan 13 23:45:17.993033 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 23:45:17.993052 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 23:45:17.993094 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 13 23:45:17.993115 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 13 23:45:17.993135 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 23:45:17.993153 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 23:45:17.993178 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 13 23:45:17.993198 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 13 23:45:17.993216 kernel: ACPI: Added _OSI(Module Device) Jan 13 23:45:17.993235 kernel: ACPI: Added _OSI(Processor Device) Jan 13 23:45:17.993254 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 23:45:17.993274 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 23:45:17.993292 kernel: ACPI: Interpreter enabled Jan 13 23:45:17.993316 kernel: ACPI: Using GIC for interrupt routing Jan 13 23:45:17.993335 kernel: ACPI: MCFG table detected, 1 entries Jan 13 23:45:17.993354 kernel: ACPI: CPU0 has been hot-added Jan 13 23:45:17.993372 kernel: ACPI: CPU1 has been hot-added Jan 13 23:45:17.993391 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00]) Jan 13 23:45:17.993798 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 23:45:17.994086 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 13 23:45:17.994360 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 13 23:45:17.994612 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00 Jan 13 23:45:17.994865 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00] Jan 13 23:45:17.994890 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jan 13 23:45:17.994910 kernel: acpiphp: Slot [1] registered Jan 13 23:45:17.994929 kernel: acpiphp: Slot [2] registered Jan 13 23:45:17.994955 kernel: acpiphp: Slot [3] registered Jan 13 23:45:17.994974 kernel: acpiphp: Slot [4] registered Jan 13 23:45:17.994993 kernel: acpiphp: Slot [5] registered Jan 13 23:45:17.995012 kernel: acpiphp: Slot [6] registered Jan 13 23:45:17.995031 kernel: acpiphp: Slot [7] registered Jan 13 23:45:17.995049 kernel: acpiphp: Slot [8] registered Jan 13 23:45:17.995099 kernel: acpiphp: Slot [9] registered Jan 13 23:45:17.995122 kernel: acpiphp: Slot [10] registered Jan 13 23:45:17.995147 kernel: acpiphp: Slot [11] registered Jan 13 23:45:17.995166 kernel: acpiphp: Slot [12] registered Jan 13 23:45:17.995185 kernel: acpiphp: Slot [13] registered Jan 13 23:45:17.995204 kernel: acpiphp: Slot [14] registered Jan 13 23:45:17.995223 kernel: acpiphp: Slot [15] registered Jan 13 23:45:17.995242 kernel: acpiphp: Slot [16] registered Jan 13 23:45:17.995260 kernel: acpiphp: Slot [17] registered Jan 13 23:45:17.995284 kernel: acpiphp: Slot [18] registered Jan 13 23:45:17.995303 kernel: acpiphp: Slot [19] registered Jan 13 23:45:17.995322 kernel: acpiphp: Slot [20] registered Jan 13 23:45:17.995341 kernel: acpiphp: Slot [21] registered Jan 13 23:45:17.995359 
kernel: acpiphp: Slot [22] registered Jan 13 23:45:17.995378 kernel: acpiphp: Slot [23] registered Jan 13 23:45:17.995397 kernel: acpiphp: Slot [24] registered Jan 13 23:45:17.995420 kernel: acpiphp: Slot [25] registered Jan 13 23:45:17.995439 kernel: acpiphp: Slot [26] registered Jan 13 23:45:17.995458 kernel: acpiphp: Slot [27] registered Jan 13 23:45:17.995494 kernel: acpiphp: Slot [28] registered Jan 13 23:45:17.995517 kernel: acpiphp: Slot [29] registered Jan 13 23:45:17.995535 kernel: acpiphp: Slot [30] registered Jan 13 23:45:17.995554 kernel: acpiphp: Slot [31] registered Jan 13 23:45:17.995573 kernel: PCI host bridge to bus 0000:00 Jan 13 23:45:17.995841 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jan 13 23:45:17.996098 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 13 23:45:17.996336 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jan 13 23:45:17.996565 kernel: pci_bus 0000:00: root bus resource [bus 00] Jan 13 23:45:17.996846 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Jan 13 23:45:17.997159 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Jan 13 23:45:17.997420 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Jan 13 23:45:17.997701 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Jan 13 23:45:17.997959 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Jan 13 23:45:17.998253 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 23:45:17.998527 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Jan 13 23:45:17.998781 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Jan 13 23:45:17.999034 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Jan 13 23:45:17.999327 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Jan 13 23:45:17.999607 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 23:45:17.999839 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jan 13 23:45:18.000106 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 13 23:45:18.000343 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jan 13 23:45:18.000368 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 13 23:45:18.000388 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 13 23:45:18.000408 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 13 23:45:18.000427 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 13 23:45:18.000446 kernel: iommu: Default domain type: Translated Jan 13 23:45:18.000472 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 13 23:45:18.000491 kernel: efivars: Registered efivars operations Jan 13 23:45:18.000510 kernel: vgaarb: loaded Jan 13 23:45:18.000530 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 13 23:45:18.000549 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 23:45:18.000568 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 23:45:18.000588 kernel: pnp: PnP ACPI init Jan 13 23:45:18.000861 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jan 13 23:45:18.000888 kernel: pnp: PnP ACPI: found 1 devices Jan 13 23:45:18.000907 kernel: NET: Registered PF_INET protocol family Jan 13 23:45:18.000926 kernel: IP idents hash table 
entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 23:45:18.000945 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 23:45:18.000965 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 23:45:18.000985 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 23:45:18.001010 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 13 23:45:18.001029 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 23:45:18.001048 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 23:45:18.001099 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 23:45:18.001121 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 23:45:18.001141 kernel: PCI: CLS 0 bytes, default 64 Jan 13 23:45:18.001160 kernel: kvm [1]: HYP mode not available Jan 13 23:45:18.001185 kernel: Initialise system trusted keyrings Jan 13 23:45:18.001204 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 23:45:18.001223 kernel: Key type asymmetric registered Jan 13 23:45:18.001243 kernel: Asymmetric key parser 'x509' registered Jan 13 23:45:18.001262 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 13 23:45:18.001281 kernel: io scheduler mq-deadline registered Jan 13 23:45:18.001300 kernel: io scheduler kyber registered Jan 13 23:45:18.001324 kernel: io scheduler bfq registered Jan 13 23:45:18.001603 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jan 13 23:45:18.001630 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 13 23:45:18.001650 kernel: ACPI: button: Power Button [PWRB] Jan 13 23:45:18.001669 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jan 13 23:45:18.001689 kernel: ACPI: button: Sleep Button [SLPB] Jan 13 23:45:18.001713 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 23:45:18.001733 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 13 23:45:18.001985 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jan 13 23:45:18.002011 kernel: printk: legacy console [ttyS0] disabled Jan 13 23:45:18.002030 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jan 13 23:45:18.002049 kernel: printk: legacy console [ttyS0] enabled Jan 13 23:45:18.002089 kernel: printk: legacy bootconsole [uart0] disabled Jan 13 23:45:18.002116 kernel: thunder_xcv, ver 1.0 Jan 13 23:45:18.002136 kernel: thunder_bgx, ver 1.0 Jan 13 23:45:18.002155 kernel: nicpf, ver 1.0 Jan 13 23:45:18.002174 kernel: nicvf, ver 1.0 Jan 13 23:45:18.002447 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 13 23:45:18.002688 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-13T23:45:14 UTC (1768347914) Jan 13 23:45:18.002713 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 23:45:18.002738 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available Jan 13 23:45:18.002757 kernel: NET: Registered PF_INET6 protocol family Jan 13 23:45:18.002776 kernel: watchdog: NMI not fully supported Jan 13 23:45:18.002795 kernel: watchdog: Hard watchdog permanently disabled Jan 13 23:45:18.002813 kernel: Segment Routing with IPv6 Jan 13 23:45:18.002832 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 23:45:18.002851 kernel: NET: Registered PF_PACKET protocol family Jan 13 23:45:18.002874 kernel: Key type 
dns_resolver registered Jan 13 23:45:18.002892 kernel: registered taskstats version 1 Jan 13 23:45:18.002911 kernel: Loading compiled-in X.509 certificates Jan 13 23:45:18.002930 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: d16d100cda59d8093883df975a5384fda36b7d35' Jan 13 23:45:18.002949 kernel: Demotion targets for Node 0: null Jan 13 23:45:18.002967 kernel: Key type .fscrypt registered Jan 13 23:45:18.002986 kernel: Key type fscrypt-provisioning registered Jan 13 23:45:18.003009 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 13 23:45:18.003029 kernel: ima: Allocated hash algorithm: sha1 Jan 13 23:45:18.003048 kernel: ima: No architecture policies found Jan 13 23:45:18.003086 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 13 23:45:18.003108 kernel: clk: Disabling unused clocks Jan 13 23:45:18.003127 kernel: PM: genpd: Disabling unused power domains Jan 13 23:45:18.003146 kernel: Freeing unused kernel memory: 12480K Jan 13 23:45:18.003165 kernel: Run /init as init process Jan 13 23:45:18.003189 kernel: with arguments: Jan 13 23:45:18.003208 kernel: /init Jan 13 23:45:18.003226 kernel: with environment: Jan 13 23:45:18.003244 kernel: HOME=/ Jan 13 23:45:18.003263 kernel: TERM=linux Jan 13 23:45:18.003283 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 13 23:45:18.003514 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 13 23:45:18.003721 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 13 23:45:18.003748 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 23:45:18.003767 kernel: GPT:25804799 != 33554431 Jan 13 23:45:18.003786 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 23:45:18.003804 kernel: GPT:25804799 != 33554431 Jan 13 23:45:18.003823 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 23:45:18.003847 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 23:45:18.003866 kernel: SCSI subsystem initialized Jan 13 23:45:18.003885 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 23:45:18.003904 kernel: device-mapper: uevent: version 1.0.3 Jan 13 23:45:18.003923 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 13 23:45:18.003942 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 13 23:45:18.003961 kernel: raid6: neonx8 gen() 6555 MB/s Jan 13 23:45:18.003984 kernel: raid6: neonx4 gen() 6521 MB/s Jan 13 23:45:18.004003 kernel: raid6: neonx2 gen() 5425 MB/s Jan 13 23:45:18.004022 kernel: raid6: neonx1 gen() 3922 MB/s Jan 13 23:45:18.004041 kernel: raid6: int64x8 gen() 3638 MB/s Jan 13 23:45:18.004088 kernel: raid6: int64x4 gen() 3676 MB/s Jan 13 23:45:18.004113 kernel: raid6: int64x2 gen() 3562 MB/s Jan 13 23:45:18.004132 kernel: raid6: int64x1 gen() 2724 MB/s Jan 13 23:45:18.004156 kernel: raid6: using algorithm neonx8 gen() 6555 MB/s Jan 13 23:45:18.004176 kernel: raid6: .... 
xor() 4738 MB/s, rmw enabled Jan 13 23:45:18.004194 kernel: raid6: using neon recovery algorithm Jan 13 23:45:18.004213 kernel: xor: measuring software checksum speed Jan 13 23:45:18.004232 kernel: 8regs : 12932 MB/sec Jan 13 23:45:18.004251 kernel: 32regs : 13013 MB/sec Jan 13 23:45:18.004269 kernel: arm64_neon : 8839 MB/sec Jan 13 23:45:18.004292 kernel: xor: using function: 32regs (13013 MB/sec) Jan 13 23:45:18.004311 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 23:45:18.004331 kernel: BTRFS: device fsid 68b1ce8e-a637-4e91-acf8-5a2e05e289e5 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (222) Jan 13 23:45:18.004351 kernel: BTRFS info (device dm-0): first mount of filesystem 68b1ce8e-a637-4e91-acf8-5a2e05e289e5 Jan 13 23:45:18.004370 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 13 23:45:18.004389 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 23:45:18.004409 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 23:45:18.004431 kernel: BTRFS info (device dm-0): enabling free space tree Jan 13 23:45:18.004450 kernel: loop: module loaded Jan 13 23:45:18.004469 kernel: loop0: detected capacity change from 0 to 91832 Jan 13 23:45:18.004488 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 23:45:18.004510 systemd[1]: Successfully made /usr/ read-only. Jan 13 23:45:18.004535 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 13 23:45:18.004561 systemd[1]: Detected virtualization amazon. Jan 13 23:45:18.004582 systemd[1]: Detected architecture arm64. Jan 13 23:45:18.004602 systemd[1]: Running in initrd. Jan 13 23:45:18.004622 systemd[1]: No hostname configured, using default hostname. Jan 13 23:45:18.004643 systemd[1]: Hostname set to . Jan 13 23:45:18.004663 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 13 23:45:18.004683 systemd[1]: Queued start job for default target initrd.target. Jan 13 23:45:18.004708 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 13 23:45:18.004728 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 23:45:18.004748 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 23:45:18.004770 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 23:45:18.004791 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 23:45:18.004832 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 23:45:18.004854 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 23:45:18.004875 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 23:45:18.004897 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 23:45:18.004918 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 13 23:45:18.004943 systemd[1]: Reached target paths.target - Path Units. 
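The GPT warnings earlier in the log (GPT:25804799 != 33554431, alternate header not at the end of the disk) are the usual signature of a block device that was grown after the image was written: the backup GPT header still sits at the last LBA of the original image while the volume now ends later. Assuming 512-byte logical sectors, which the log does not state, the two LBAs translate to sizes as follows; the disk-uuid step further down is what rewrites the secondary header at the true end of the disk.

```python
# "GPT:25804799 != 33554431": backup GPT header LBA vs. the device's
# actual last LBA. Sizes below assume 512-byte logical sectors, which
# the log does not state.
SECTOR = 512
backup_lba = 25_804_799
last_lba   = 33_554_431

image_size  = (backup_lba + 1) * SECTOR
volume_size = (last_lba + 1) * SECTOR
print(image_size / 2**30)                  # ~12.31 GiB original image
print(volume_size / 2**30)                 # 16.00 GiB attached volume
print((volume_size - image_size) / 2**30)  # ~3.69 GiB not yet covered by the GPT
```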
Jan 13 23:45:18.004964 systemd[1]: Reached target slices.target - Slice Units. Jan 13 23:45:18.004985 systemd[1]: Reached target swap.target - Swaps. Jan 13 23:45:18.005006 systemd[1]: Reached target timers.target - Timer Units. Jan 13 23:45:18.005026 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 23:45:18.005047 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 23:45:18.005087 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 13 23:45:18.005117 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 23:45:18.005138 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 13 23:45:18.005159 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 23:45:18.005180 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 23:45:18.005201 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 23:45:18.005222 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 23:45:18.005244 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 23:45:18.005270 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 23:45:18.005291 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 23:45:18.005312 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 23:45:18.005334 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 13 23:45:18.005355 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 23:45:18.005376 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 23:45:18.005397 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 23:45:18.005424 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 23:45:18.005446 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 23:45:18.005471 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 23:45:18.005493 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 23:45:18.005514 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 23:45:18.005578 systemd-journald[359]: Collecting audit messages is enabled. Jan 13 23:45:18.005625 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 23:45:18.005646 systemd-journald[359]: Journal started Jan 13 23:45:18.005682 systemd-journald[359]: Runtime Journal (/run/log/journal/ec2ef356d117b6275a2a3a3c7091c43c) is 8M, max 75.3M, 67.3M free. Jan 13 23:45:18.016007 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 23:45:18.016109 kernel: audit: type=1130 audit(1768347918.006:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:18.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:45:18.037354 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 23:45:18.053270 kernel: Bridge firewalling registered Jan 13 23:45:18.053314 kernel: audit: type=1130 audit(1768347918.044:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:18.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:18.040713 systemd-modules-load[361]: Inserted module 'br_netfilter' Jan 13 23:45:18.045883 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 23:45:18.060048 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 23:45:18.073268 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 23:45:18.088213 kernel: audit: type=1130 audit(1768347918.072:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:18.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:18.091870 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 23:45:18.105333 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:45:18.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:18.124137 kernel: audit: type=1130 audit(1768347918.111:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:18.125335 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 23:45:18.129049 systemd-tmpfiles[372]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 13 23:45:18.150257 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 23:45:18.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:18.165000 audit: BPF prog-id=6 op=LOAD Jan 13 23:45:18.168133 kernel: audit: type=1130 audit(1768347918.159:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:18.168528 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 23:45:18.174598 kernel: audit: type=1334 audit(1768347918.165:7): prog-id=6 op=LOAD Jan 13 23:45:18.178301 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
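The audit records interleaved through this part of the log share one shape: a numeric type, a timestamp of the form audit(epoch.millis:serial), and a run of key=value fields. A small sketch that takes one of the records above apart, with the quoted msg= payload trimmed off since the naive field split below cannot handle embedded spaces; the epoch seconds decode to the same wall-clock time as the journal prefix.

```python
import re
from datetime import datetime, timezone

# One of the audit records from the log above, with the quoted msg='...'
# payload trimmed off so a naive whitespace split is good enough here.
record = ("type=1130 audit(1768347918.006:2): pid=1 uid=0 "
          "auid=4294967295 ses=4294967295 subj=kernel res=success")

m = re.match(r"type=(\d+) audit\((\d+)\.(\d+):(\d+)\): (.*)", record)
rec_type, secs, millis, serial, rest = m.groups()
fields = dict(pair.split("=", 1) for pair in rest.split())

stamp = datetime.fromtimestamp(int(secs), tz=timezone.utc)
print(rec_type, serial, stamp)   # 1130 2 2026-01-13 23:45:18+00:00
print(fields["auid"])            # 4294967295, i.e. unset (-1 as a u32)
```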
Jan 13 23:45:18.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:18.192222 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 23:45:18.195138 kernel: audit: type=1130 audit(1768347918.182:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:18.195185 kernel: audit: type=1130 audit(1768347918.194:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:18.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:18.209365 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 23:45:18.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:18.222094 kernel: audit: type=1130 audit(1768347918.214:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:18.222881 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 23:45:18.269566 dracut-cmdline[401]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=3d3f73de8d2693594dfefd279d2c8d77c282a05a4cbc54177503d31784261f6b Jan 13 23:45:18.353255 systemd-resolved[389]: Positive Trust Anchors: Jan 13 23:45:18.353292 systemd-resolved[389]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 23:45:18.353301 systemd-resolved[389]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 13 23:45:18.353363 systemd-resolved[389]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 23:45:18.613117 kernel: Loading iSCSI transport class v2.0-870. Jan 13 23:45:18.623112 kernel: random: crng init done Jan 13 23:45:18.648778 systemd-resolved[389]: Defaulting to hostname 'linux'. Jan 13 23:45:18.652805 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 13 23:45:18.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:18.658336 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 23:45:18.670080 kernel: audit: type=1130 audit(1768347918.657:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:18.676112 kernel: iscsi: registered transport (tcp) Jan 13 23:45:18.726504 kernel: iscsi: registered transport (qla4xxx) Jan 13 23:45:18.726581 kernel: QLogic iSCSI HBA Driver Jan 13 23:45:18.766645 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 23:45:18.801363 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 23:45:18.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:18.811560 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 23:45:18.889741 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 23:45:18.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:18.896659 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 23:45:18.901641 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 23:45:18.969130 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 23:45:18.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:18.974000 audit: BPF prog-id=7 op=LOAD Jan 13 23:45:18.974000 audit: BPF prog-id=8 op=LOAD Jan 13 23:45:18.977299 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 23:45:19.038506 systemd-udevd[644]: Using default interface naming scheme 'v257'. Jan 13 23:45:19.060266 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 23:45:19.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:19.069035 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 23:45:19.121649 dracut-pre-trigger[709]: rd.md=0: removing MD RAID activation Jan 13 23:45:19.124698 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 23:45:19.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:19.131000 audit: BPF prog-id=9 op=LOAD Jan 13 23:45:19.136014 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 13 23:45:19.187209 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 23:45:19.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:19.194319 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 23:45:19.236799 systemd-networkd[751]: lo: Link UP Jan 13 23:45:19.236819 systemd-networkd[751]: lo: Gained carrier Jan 13 23:45:19.239184 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 23:45:19.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:19.249315 systemd[1]: Reached target network.target - Network. Jan 13 23:45:19.357184 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 23:45:19.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:19.364862 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 23:45:19.556767 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 23:45:19.557619 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:45:19.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:19.566560 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 23:45:19.572782 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 23:45:19.630411 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:45:19.635664 kernel: nvme nvme0: using unchecked data buffer Jan 13 23:45:19.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:19.643174 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 13 23:45:19.643243 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 13 23:45:19.653737 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 13 23:45:19.654130 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 13 23:45:19.663100 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:70:f8:42:2a:cf Jan 13 23:45:19.664480 (udev-worker)[786]: Network interface NamePolicy= disabled on kernel command line. Jan 13 23:45:19.678257 systemd-networkd[751]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 13 23:45:19.678280 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
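The ENA probe above reports the adapter's MAC address 06:70:f8:42:2a:cf, and a little further down eth0 gains an IPv6 link-local address. The log never prints that address; assuming the default EUI-64 derivation (stable-privacy addressing would give a different result), it follows from the MAC like this:

```python
import ipaddress

# Derive the EUI-64 based IPv6 link-local address from the MAC the ENA
# driver reported. Assumes the default EUI-64 scheme; the log does not
# print the resulting address, and stable-privacy mode would differ.
mac = "06:70:f8:42:2a:cf"
octets = bytearray(int(x, 16) for x in mac.split(":"))
octets[0] ^= 0x02                                  # flip the universal/local bit
eui64 = bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])
link_local = ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + eui64)
print(link_local)                                  # fe80::470:f8ff:fe42:2acf
```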
Jan 13 23:45:19.693179 systemd-networkd[751]: eth0: Link UP Jan 13 23:45:19.700247 systemd-networkd[751]: eth0: Gained carrier Jan 13 23:45:19.700288 systemd-networkd[751]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 13 23:45:19.716624 systemd-networkd[751]: eth0: DHCPv4 address 172.31.28.147/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 23:45:19.789531 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 13 23:45:19.864611 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 13 23:45:19.894505 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 23:45:19.924693 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 13 23:45:19.950299 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 23:45:19.956230 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 23:45:19.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:19.962229 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 23:45:19.965020 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 23:45:19.973703 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 23:45:19.983955 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 23:45:20.007164 disk-uuid[909]: Primary Header is updated. Jan 13 23:45:20.007164 disk-uuid[909]: Secondary Entries is updated. Jan 13 23:45:20.007164 disk-uuid[909]: Secondary Header is updated. Jan 13 23:45:20.069993 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 23:45:20.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:21.111813 disk-uuid[910]: Warning: The kernel is still using the old partition table. Jan 13 23:45:21.111813 disk-uuid[910]: The new table will be used at the next reboot or after you Jan 13 23:45:21.111813 disk-uuid[910]: run partprobe(8) or kpartx(8) Jan 13 23:45:21.111813 disk-uuid[910]: The operation has completed successfully. Jan 13 23:45:21.131159 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 23:45:21.132024 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 23:45:21.142051 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 23:45:21.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:21.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:45:21.205120 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1094) Jan 13 23:45:21.210418 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2541ae65-ff33-437e-8e42-0fd50870835d Jan 13 23:45:21.210471 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 23:45:21.244814 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 23:45:21.244889 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 13 23:45:21.255141 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 2541ae65-ff33-437e-8e42-0fd50870835d Jan 13 23:45:21.256388 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 23:45:21.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:21.263198 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 23:45:21.370208 systemd-networkd[751]: eth0: Gained IPv6LL Jan 13 23:45:22.269921 ignition[1113]: Ignition 2.24.0 Jan 13 23:45:22.269953 ignition[1113]: Stage: fetch-offline Jan 13 23:45:22.270397 ignition[1113]: no configs at "/usr/lib/ignition/base.d" Jan 13 23:45:22.270425 ignition[1113]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 23:45:22.278077 ignition[1113]: Ignition finished successfully Jan 13 23:45:22.281603 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 23:45:22.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:22.289260 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
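The DHCPv4 lease recorded a few lines above hands out 172.31.28.147/20 with gateway 172.31.16.1. A quick consistency check with the ipaddress module: both addresses sit in the same /20, a 4096-address subnet of the kind typically used in a default-VPC layout.

```python
import ipaddress

# Check the DHCPv4 lease from the log: 172.31.28.147/20, gateway
# 172.31.16.1 (which is also the DHCP server here).
iface   = ipaddress.ip_interface("172.31.28.147/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)                  # 172.31.16.0/20
print(gateway in iface.network)       # True
print(iface.network.num_addresses)    # 4096
```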
Jan 13 23:45:22.328594 ignition[1120]: Ignition 2.24.0 Jan 13 23:45:22.330370 ignition[1120]: Stage: fetch Jan 13 23:45:22.330744 ignition[1120]: no configs at "/usr/lib/ignition/base.d" Jan 13 23:45:22.330773 ignition[1120]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 23:45:22.330898 ignition[1120]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 23:45:22.359203 ignition[1120]: PUT result: OK Jan 13 23:45:22.362738 ignition[1120]: parsed url from cmdline: "" Jan 13 23:45:22.362761 ignition[1120]: no config URL provided Jan 13 23:45:22.362783 ignition[1120]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 23:45:22.362816 ignition[1120]: no config at "/usr/lib/ignition/user.ign" Jan 13 23:45:22.362859 ignition[1120]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 23:45:22.365481 ignition[1120]: PUT result: OK Jan 13 23:45:22.365622 ignition[1120]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 13 23:45:22.373329 ignition[1120]: GET result: OK Jan 13 23:45:22.373785 ignition[1120]: parsing config with SHA512: d701251f85fda93219f5ec37a12642272c1162262f69131ce7f8281f59bcf358f91f80a7eb2493f037f18c04b4e20eef285212493adad12b635c4148495c137a Jan 13 23:45:22.388252 unknown[1120]: fetched base config from "system" Jan 13 23:45:22.388759 unknown[1120]: fetched base config from "system" Jan 13 23:45:22.389709 ignition[1120]: fetch: fetch complete Jan 13 23:45:22.388773 unknown[1120]: fetched user config from "aws" Jan 13 23:45:22.389721 ignition[1120]: fetch: fetch passed Jan 13 23:45:22.389813 ignition[1120]: Ignition finished successfully Jan 13 23:45:22.402818 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 23:45:22.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:22.409140 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 23:45:22.453377 ignition[1126]: Ignition 2.24.0 Jan 13 23:45:22.453410 ignition[1126]: Stage: kargs Jan 13 23:45:22.453786 ignition[1126]: no configs at "/usr/lib/ignition/base.d" Jan 13 23:45:22.453808 ignition[1126]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 23:45:22.454106 ignition[1126]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 23:45:22.456913 ignition[1126]: PUT result: OK Jan 13 23:45:22.468038 ignition[1126]: kargs: kargs passed Jan 13 23:45:22.469893 ignition[1126]: Ignition finished successfully Jan 13 23:45:22.475152 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 23:45:22.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:22.481949 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
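The fetch stage above is the IMDSv2 session-token flow: a PUT to http://169.254.169.254/latest/api/token, then a GET of the 2019-10-01 user-data path with that token, after which Ignition logs the SHA512 of the config it parsed. A minimal sketch of the same two requests; it only works from inside an instance, and the 21600-second TTL is an arbitrary choice rather than a value taken from the log.

```python
import hashlib
import urllib.request

# Minimal sketch of the IMDSv2 exchange Ignition performs above:
# PUT for a session token, then GET the user data with that token.
# Only works from inside an EC2 instance; the TTL is an arbitrary choice.
IMDS = "http://169.254.169.254"

token_req = urllib.request.Request(
    IMDS + "/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
with urllib.request.urlopen(token_req, timeout=5) as resp:
    token = resp.read().decode()

data_req = urllib.request.Request(
    IMDS + "/2019-10-01/user-data",
    headers={"X-aws-ec2-metadata-token": token},
)
with urllib.request.urlopen(data_req, timeout=5) as resp:
    user_data = resp.read()

# Digest comparable to the "parsing config with SHA512: ..." line above.
print(hashlib.sha512(user_data).hexdigest())
```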
Jan 13 23:45:22.529755 ignition[1132]: Ignition 2.24.0 Jan 13 23:45:22.530338 ignition[1132]: Stage: disks Jan 13 23:45:22.530752 ignition[1132]: no configs at "/usr/lib/ignition/base.d" Jan 13 23:45:22.530774 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 23:45:22.530915 ignition[1132]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 23:45:22.537670 ignition[1132]: PUT result: OK Jan 13 23:45:22.546193 ignition[1132]: disks: disks passed Jan 13 23:45:22.546345 ignition[1132]: Ignition finished successfully Jan 13 23:45:22.550928 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 23:45:22.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:22.555863 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 23:45:22.561192 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 23:45:22.566824 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 23:45:22.571649 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 23:45:22.573990 systemd[1]: Reached target basic.target - Basic System. Jan 13 23:45:22.582156 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 23:45:22.654245 systemd-fsck[1140]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 13 23:45:22.660144 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 23:45:22.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:22.668453 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 23:45:22.947112 kernel: EXT4-fs (nvme0n1p9): mounted filesystem db887ae3-d64c-46de-9f1e-de51a801ae44 r/w with ordered data mode. Quota mode: none. Jan 13 23:45:22.948704 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 23:45:22.952666 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 23:45:22.961236 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 23:45:22.965190 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 23:45:22.976539 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 23:45:22.976644 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 23:45:22.976704 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 23:45:23.001258 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 23:45:23.003729 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
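The fsck summary above reports ROOT clean with 112378 of 1617920 blocks and 15 of 1631200 files in use. Turning the block counts into sizes needs the filesystem block size, which the log does not show; assuming the common ext4 default of 4 KiB:

```python
# Convert the fsck summary "112378/1617920 blocks" into sizes, assuming
# a 4 KiB ext4 block size (the log does not state the block size).
BLOCK = 4096
used_blocks, total_blocks = 112_378, 1_617_920

gib = lambda blocks: blocks * BLOCK / 2**30
print(f"ROOT size  ~{gib(total_blocks):.2f} GiB")      # ~6.17 GiB
print(f"in use     ~{gib(used_blocks):.2f} GiB")       # ~0.43 GiB
print(f"usage      {used_blocks / total_blocks:.1%}")  # ~6.9%
```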
Jan 13 23:45:23.040123 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1159) Jan 13 23:45:23.045595 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2541ae65-ff33-437e-8e42-0fd50870835d Jan 13 23:45:23.045662 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 23:45:23.057227 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 23:45:23.057305 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 13 23:45:23.059915 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 23:45:25.220681 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 23:45:25.239591 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 13 23:45:25.239643 kernel: audit: type=1130 audit(1768347925.222:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:25.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:25.227208 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 23:45:25.236281 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 23:45:25.267322 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 23:45:25.272094 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 2541ae65-ff33-437e-8e42-0fd50870835d Jan 13 23:45:25.310578 ignition[1255]: INFO : Ignition 2.24.0 Jan 13 23:45:25.313241 ignition[1255]: INFO : Stage: mount Jan 13 23:45:25.313241 ignition[1255]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 23:45:25.313241 ignition[1255]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 23:45:25.320243 ignition[1255]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 23:45:25.323343 ignition[1255]: INFO : PUT result: OK Jan 13 23:45:25.328137 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 23:45:25.336188 ignition[1255]: INFO : mount: mount passed Jan 13 23:45:25.336188 ignition[1255]: INFO : Ignition finished successfully Jan 13 23:45:25.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:25.341905 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 23:45:25.346104 kernel: audit: type=1130 audit(1768347925.332:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:25.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:25.350963 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 23:45:25.361708 kernel: audit: type=1130 audit(1768347925.348:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:25.382365 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 13 23:45:25.446115 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1268) Jan 13 23:45:25.451039 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2541ae65-ff33-437e-8e42-0fd50870835d Jan 13 23:45:25.451113 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 23:45:25.461841 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 23:45:25.461916 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 13 23:45:25.464982 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 23:45:25.511790 ignition[1285]: INFO : Ignition 2.24.0 Jan 13 23:45:25.511790 ignition[1285]: INFO : Stage: files Jan 13 23:45:25.517410 ignition[1285]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 23:45:25.517410 ignition[1285]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 23:45:25.517410 ignition[1285]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 23:45:25.529313 ignition[1285]: INFO : PUT result: OK Jan 13 23:45:25.532713 ignition[1285]: DEBUG : files: compiled without relabeling support, skipping Jan 13 23:45:25.541689 ignition[1285]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 23:45:25.541689 ignition[1285]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 23:45:25.551121 ignition[1285]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 23:45:25.554341 ignition[1285]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 23:45:25.557637 unknown[1285]: wrote ssh authorized keys file for user: core Jan 13 23:45:25.560118 ignition[1285]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 23:45:25.635650 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 13 23:45:25.640051 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 13 23:45:25.729252 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 23:45:25.967329 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 13 23:45:25.973152 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 13 23:45:25.973152 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 23:45:25.973152 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 23:45:25.973152 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 23:45:25.973152 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 23:45:25.973152 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 23:45:25.973152 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 
23:45:25.973152 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 23:45:26.005999 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 23:45:26.005999 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 23:45:26.005999 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 13 23:45:26.005999 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 13 23:45:26.005999 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 13 23:45:26.005999 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jan 13 23:45:26.472449 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 13 23:45:26.885901 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 13 23:45:26.891093 ignition[1285]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 13 23:45:26.891093 ignition[1285]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 23:45:26.898195 ignition[1285]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 23:45:26.898195 ignition[1285]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 13 23:45:26.898195 ignition[1285]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 13 23:45:26.898195 ignition[1285]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 23:45:26.898195 ignition[1285]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 23:45:26.898195 ignition[1285]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 23:45:26.898195 ignition[1285]: INFO : files: files passed Jan 13 23:45:26.898195 ignition[1285]: INFO : Ignition finished successfully Jan 13 23:45:26.933821 kernel: audit: type=1130 audit(1768347926.923:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:26.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:26.921707 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 23:45:26.937051 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
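
The files-stage entries above record, among other writes, the Kubernetes sysext image at /opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw, a symlink /etc/extensions/kubernetes.raw pointing at it, and the prepare-helm.service unit with its preset set to enabled. As a purely illustrative check (not part of the boot flow), a minimal Go program could confirm that the symlink target matches what Ignition wrote; both paths below are copied from the log.

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Link path and expected target are copied from the Ignition "files" entries above.
    	const link = "/etc/extensions/kubernetes.raw"
    	const want = "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"

    	got, err := os.Readlink(link)
    	if err != nil {
    		fmt.Println("readlink failed:", err)
    		os.Exit(1)
    	}
    	if got != want {
    		fmt.Printf("unexpected target: got %q, want %q\n", got, want)
    		os.Exit(1)
    	}
    	fmt.Println("kubernetes sysext symlink matches the Ignition-written target")
    }
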
Jan 13 23:45:26.941514 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 23:45:26.968482 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 23:45:26.970734 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 23:45:26.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:26.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:26.988223 kernel: audit: type=1130 audit(1768347926.976:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:26.988294 kernel: audit: type=1131 audit(1768347926.976:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:26.996112 initrd-setup-root-after-ignition[1317]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 23:45:26.996112 initrd-setup-root-after-ignition[1317]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 23:45:27.005375 initrd-setup-root-after-ignition[1321]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 23:45:27.011898 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 23:45:27.017026 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 23:45:27.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.026452 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 23:45:27.035481 kernel: audit: type=1130 audit(1768347927.015:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.128984 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 23:45:27.131156 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 23:45:27.153008 kernel: audit: type=1130 audit(1768347927.134:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.153056 kernel: audit: type=1131 audit(1768347927.134:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 13 23:45:27.135586 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 23:45:27.138246 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 23:45:27.151093 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 23:45:27.155886 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 23:45:27.217693 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 23:45:27.225993 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 23:45:27.239842 kernel: audit: type=1130 audit(1768347927.222:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.265193 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 13 23:45:27.267675 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 23:45:27.273923 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 23:45:27.277782 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 23:45:27.281956 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 23:45:27.282324 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 23:45:27.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.292626 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 23:45:27.296000 systemd[1]: Stopped target basic.target - Basic System. Jan 13 23:45:27.299800 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 23:45:27.306570 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 23:45:27.309658 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 23:45:27.313230 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 13 23:45:27.321034 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 23:45:27.330224 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 23:45:27.333443 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 23:45:27.341172 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 23:45:27.343834 systemd[1]: Stopped target swap.target - Swaps. Jan 13 23:45:27.349982 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 23:45:27.351189 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 23:45:27.357651 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 23:45:27.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.362784 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 13 23:45:27.366210 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 23:45:27.371504 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 23:45:27.374972 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 23:45:27.375250 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 23:45:27.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.385620 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 23:45:27.386134 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 23:45:27.394737 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 23:45:27.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.394957 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 23:45:27.400689 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 23:45:27.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.412004 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 23:45:27.416437 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 23:45:27.416982 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 23:45:27.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.430197 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 23:45:27.430498 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 23:45:27.443624 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 23:45:27.451729 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 23:45:27.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.461797 ignition[1341]: INFO : Ignition 2.24.0 Jan 13 23:45:27.461797 ignition[1341]: INFO : Stage: umount Jan 13 23:45:27.461797 ignition[1341]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 23:45:27.461797 ignition[1341]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 23:45:27.461797 ignition[1341]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 23:45:27.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:45:27.475899 ignition[1341]: INFO : PUT result: OK Jan 13 23:45:27.489280 ignition[1341]: INFO : umount: umount passed Jan 13 23:45:27.492807 ignition[1341]: INFO : Ignition finished successfully Jan 13 23:45:27.499024 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 23:45:27.499269 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 23:45:27.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.513934 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 23:45:27.516345 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 23:45:27.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.523185 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 23:45:27.523541 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 23:45:27.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.533407 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 23:45:27.533529 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 23:45:27.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.540534 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 23:45:27.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.543483 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 23:45:27.546207 systemd[1]: Stopped target network.target - Network. Jan 13 23:45:27.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.550447 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 23:45:27.550572 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 23:45:27.559666 systemd[1]: Stopped target paths.target - Path Units. Jan 13 23:45:27.563826 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 23:45:27.578189 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 23:45:27.581290 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 23:45:27.586785 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 23:45:27.589350 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 23:45:27.589430 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 13 23:45:27.595916 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 23:45:27.595996 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 23:45:27.605752 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 13 23:45:27.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.605828 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 13 23:45:27.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.608602 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 23:45:27.608728 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 23:45:27.612009 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 23:45:27.612126 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 23:45:27.616774 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 23:45:27.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.623820 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 23:45:27.628193 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 23:45:27.629763 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 23:45:27.632243 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 23:45:27.663107 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 23:45:27.667209 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 23:45:27.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.673972 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 23:45:27.674549 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 23:45:27.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.685000 audit: BPF prog-id=9 op=UNLOAD Jan 13 23:45:27.685000 audit: BPF prog-id=6 op=UNLOAD Jan 13 23:45:27.687242 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 13 23:45:27.692684 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 23:45:27.692771 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 23:45:27.695737 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 23:45:27.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.695846 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 23:45:27.707931 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 23:45:27.711970 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jan 13 23:45:27.712121 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 23:45:27.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.724482 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 23:45:27.725467 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 23:45:27.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.731638 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 23:45:27.731739 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 23:45:27.738558 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 23:45:27.770021 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 23:45:27.777331 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 23:45:27.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.784214 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 23:45:27.784324 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 23:45:27.787322 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 23:45:27.794476 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 23:45:27.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.798237 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 23:45:27.798370 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 23:45:27.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.802283 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 23:45:27.802403 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 23:45:27.814462 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 23:45:27.814568 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 23:45:27.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.826556 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 23:45:27.829572 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Jan 13 23:45:27.829691 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 23:45:27.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.843864 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 23:45:27.844512 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 23:45:27.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.852439 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 23:45:27.853014 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 23:45:27.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.858753 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 23:45:27.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.858854 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 23:45:27.866205 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 23:45:27.866326 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:45:27.871964 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 23:45:27.875231 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 23:45:27.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.890868 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 23:45:27.891105 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 23:45:27.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:27.899935 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 23:45:27.907010 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 23:45:27.941637 systemd[1]: Switching root. Jan 13 23:45:28.015937 systemd-journald[359]: Journal stopped Jan 13 23:45:31.848499 systemd-journald[359]: Received SIGTERM from PID 1 (systemd). 
Jan 13 23:45:31.848611 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 23:45:31.848654 kernel: SELinux: policy capability open_perms=1 Jan 13 23:45:31.848694 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 23:45:31.848727 kernel: SELinux: policy capability always_check_network=0 Jan 13 23:45:31.848763 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 23:45:31.848802 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 23:45:31.848833 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 23:45:31.848864 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 23:45:31.848896 kernel: SELinux: policy capability userspace_initial_context=0 Jan 13 23:45:31.848929 systemd[1]: Successfully loaded SELinux policy in 120.135ms. Jan 13 23:45:31.848982 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.930ms. Jan 13 23:45:31.849019 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 13 23:45:31.849054 systemd[1]: Detected virtualization amazon. Jan 13 23:45:31.849117 systemd[1]: Detected architecture arm64. Jan 13 23:45:31.849151 systemd[1]: Detected first boot. Jan 13 23:45:31.849186 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 13 23:45:31.849220 zram_generator::config[1384]: No configuration found. Jan 13 23:45:31.849258 kernel: NET: Registered PF_VSOCK protocol family Jan 13 23:45:31.849290 systemd[1]: Populated /etc with preset unit settings. Jan 13 23:45:31.849328 kernel: kauditd_printk_skb: 44 callbacks suppressed Jan 13 23:45:31.849359 kernel: audit: type=1334 audit(1768347931.153:89): prog-id=12 op=LOAD Jan 13 23:45:31.849392 kernel: audit: type=1334 audit(1768347931.156:90): prog-id=3 op=UNLOAD Jan 13 23:45:31.849421 kernel: audit: type=1334 audit(1768347931.156:91): prog-id=13 op=LOAD Jan 13 23:45:31.849452 kernel: audit: type=1334 audit(1768347931.156:92): prog-id=14 op=LOAD Jan 13 23:45:31.849484 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 23:45:31.849521 kernel: audit: type=1334 audit(1768347931.156:93): prog-id=4 op=UNLOAD Jan 13 23:45:31.849554 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 23:45:31.849587 kernel: audit: type=1334 audit(1768347931.156:94): prog-id=5 op=UNLOAD Jan 13 23:45:31.849615 kernel: audit: type=1131 audit(1768347931.160:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:31.849643 kernel: audit: type=1334 audit(1768347931.171:96): prog-id=12 op=UNLOAD Jan 13 23:45:31.849683 kernel: audit: type=1130 audit(1768347931.181:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:31.849715 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Jan 13 23:45:31.849753 kernel: audit: type=1131 audit(1768347931.181:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:31.849786 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 23:45:31.849820 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 23:45:31.849850 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 23:45:31.849891 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 23:45:31.849921 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 23:45:31.849956 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 23:45:31.849989 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 23:45:31.850019 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 23:45:31.850051 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 23:45:31.850134 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 23:45:31.850167 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 23:45:31.850200 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 23:45:31.850234 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 23:45:31.850264 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 23:45:31.850293 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 23:45:31.850324 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 23:45:31.850357 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 23:45:31.850388 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 23:45:31.850417 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 23:45:31.850446 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 23:45:31.850477 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 23:45:31.850507 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 23:45:31.850539 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 23:45:31.850573 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 13 23:45:31.850606 systemd[1]: Reached target slices.target - Slice Units. Jan 13 23:45:31.850639 systemd[1]: Reached target swap.target - Swaps. Jan 13 23:45:31.850668 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 23:45:31.850697 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 23:45:31.850725 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 13 23:45:31.850754 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 13 23:45:31.850783 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. 
Jan 13 23:45:31.850816 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 23:45:31.850847 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 13 23:45:31.850879 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 13 23:45:31.850910 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 23:45:31.850939 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 23:45:31.850968 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 23:45:31.850996 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 23:45:31.851033 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 23:45:31.851107 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 23:45:31.851141 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 23:45:31.851171 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 23:45:31.851200 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 23:45:31.851233 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 23:45:31.851268 systemd[1]: Reached target machines.target - Containers. Jan 13 23:45:31.851297 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 23:45:31.851327 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 23:45:31.851355 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 23:45:31.851407 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 23:45:31.851440 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 23:45:31.851469 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 23:45:31.851503 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 23:45:31.851532 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 23:45:31.851562 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 23:45:31.851592 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 23:45:31.851622 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 23:45:31.851656 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 23:45:31.851685 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 23:45:31.851718 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 23:45:31.851751 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 13 23:45:31.851781 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 23:45:31.851810 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 23:45:31.851844 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jan 13 23:45:31.851873 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 23:45:31.851907 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 13 23:45:31.851936 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 23:45:31.851968 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 23:45:31.851997 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 23:45:31.852025 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 23:45:31.852078 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 23:45:31.852135 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 23:45:31.852165 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 23:45:31.852195 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 23:45:31.852224 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 23:45:31.852256 kernel: fuse: init (API version 7.41) Jan 13 23:45:31.852291 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 23:45:31.852321 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 23:45:31.852350 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 23:45:31.852380 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 23:45:31.852409 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 23:45:31.852443 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 23:45:31.852472 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 23:45:31.852503 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 23:45:31.852532 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 23:45:31.852561 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 23:45:31.852596 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 23:45:31.852625 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 23:45:31.852657 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 23:45:31.852690 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 13 23:45:31.852720 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 23:45:31.852749 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 23:45:31.852786 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 13 23:45:31.852815 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 23:45:31.852845 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 13 23:45:31.852919 systemd-journald[1460]: Collecting audit messages is enabled. Jan 13 23:45:31.852968 systemd-journald[1460]: Journal started Jan 13 23:45:31.853019 systemd-journald[1460]: Runtime Journal (/run/log/journal/ec2ef356d117b6275a2a3a3c7091c43c) is 8M, max 75.3M, 67.3M free. 
Jan 13 23:45:31.335000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 13 23:45:31.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:31.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:31.583000 audit: BPF prog-id=14 op=UNLOAD Jan 13 23:45:31.583000 audit: BPF prog-id=13 op=UNLOAD Jan 13 23:45:31.587000 audit: BPF prog-id=15 op=LOAD Jan 13 23:45:31.587000 audit: BPF prog-id=16 op=LOAD Jan 13 23:45:31.588000 audit: BPF prog-id=17 op=LOAD Jan 13 23:45:31.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:31.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:31.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:31.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:31.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:31.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:31.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:31.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:31.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:31.858356 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 23:45:31.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:45:31.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:31.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:31.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:31.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:31.842000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 13 23:45:31.842000 audit[1460]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffd4e5fd50 a2=4000 a3=0 items=0 ppid=1 pid=1460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:31.842000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 13 23:45:31.142600 systemd[1]: Queued start job for default target multi-user.target. Jan 13 23:45:31.863388 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 23:45:31.158681 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 13 23:45:31.161138 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 23:45:31.874083 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 23:45:31.884394 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 23:45:31.894116 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 23:45:31.901289 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 23:45:31.921402 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 23:45:31.921497 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 23:45:31.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:31.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:31.933093 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 13 23:45:31.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 13 23:45:31.968886 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 23:45:31.974650 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 23:45:31.988650 kernel: ACPI: bus type drm_connector registered Jan 13 23:45:31.980581 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 23:45:31.985387 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 13 23:45:31.989947 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 23:45:31.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:31.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:31.992383 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 23:45:32.051232 systemd-journald[1460]: Time spent on flushing to /var/log/journal/ec2ef356d117b6275a2a3a3c7091c43c is 111.745ms for 1054 entries. Jan 13 23:45:32.051232 systemd-journald[1460]: System Journal (/var/log/journal/ec2ef356d117b6275a2a3a3c7091c43c) is 8M, max 588.1M, 580.1M free. Jan 13 23:45:32.225484 systemd-journald[1460]: Received client request to flush runtime journal. Jan 13 23:45:32.225579 kernel: loop1: detected capacity change from 0 to 45344 Jan 13 23:45:32.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:32.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:32.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:32.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:32.088054 systemd-tmpfiles[1488]: ACLs are not supported, ignoring. Jan 13 23:45:32.088107 systemd-tmpfiles[1488]: ACLs are not supported, ignoring. Jan 13 23:45:32.116355 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 23:45:32.193351 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 23:45:32.201697 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 23:45:32.206570 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 23:45:32.214014 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 23:45:32.225415 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 23:45:32.234761 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Jan 13 23:45:32.239720 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 23:45:32.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:32.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:32.243974 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 23:45:32.248017 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 13 23:45:32.267671 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 23:45:32.270744 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 23:45:32.337861 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 23:45:32.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:32.346000 audit: BPF prog-id=18 op=LOAD Jan 13 23:45:32.346000 audit: BPF prog-id=19 op=LOAD Jan 13 23:45:32.346000 audit: BPF prog-id=20 op=LOAD Jan 13 23:45:32.349044 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 13 23:45:32.354000 audit: BPF prog-id=21 op=LOAD Jan 13 23:45:32.357531 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 23:45:32.365427 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 23:45:32.384000 audit: BPF prog-id=22 op=LOAD Jan 13 23:45:32.385000 audit: BPF prog-id=23 op=LOAD Jan 13 23:45:32.396000 audit: BPF prog-id=24 op=LOAD Jan 13 23:45:32.399566 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 23:45:32.409000 audit: BPF prog-id=25 op=LOAD Jan 13 23:45:32.409000 audit: BPF prog-id=26 op=LOAD Jan 13 23:45:32.409000 audit: BPF prog-id=27 op=LOAD Jan 13 23:45:32.412886 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 13 23:45:32.446128 kernel: loop2: detected capacity change from 0 to 211168 Jan 13 23:45:32.461106 systemd-tmpfiles[1540]: ACLs are not supported, ignoring. Jan 13 23:45:32.461147 systemd-tmpfiles[1540]: ACLs are not supported, ignoring. Jan 13 23:45:32.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:32.476380 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 23:45:32.522995 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 23:45:32.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:32.559291 systemd-nsresourced[1543]: Not setting up BPF subsystem, as functionality has been disabled at compile time. 
Jan 13 23:45:32.564452 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 13 23:45:32.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:32.706743 systemd-oomd[1538]: No swap; memory pressure usage will be degraded Jan 13 23:45:32.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:32.707734 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 13 23:45:32.773103 kernel: loop3: detected capacity change from 0 to 61504 Jan 13 23:45:32.808942 systemd-resolved[1539]: Positive Trust Anchors: Jan 13 23:45:32.808978 systemd-resolved[1539]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 23:45:32.808988 systemd-resolved[1539]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 13 23:45:32.809050 systemd-resolved[1539]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 23:45:32.823866 systemd-resolved[1539]: Defaulting to hostname 'linux'. Jan 13 23:45:32.826242 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 23:45:32.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:32.829005 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 23:45:33.093146 kernel: loop4: detected capacity change from 0 to 100192 Jan 13 23:45:33.263172 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 23:45:33.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:33.265000 audit: BPF prog-id=8 op=UNLOAD Jan 13 23:45:33.265000 audit: BPF prog-id=7 op=UNLOAD Jan 13 23:45:33.266000 audit: BPF prog-id=28 op=LOAD Jan 13 23:45:33.266000 audit: BPF prog-id=29 op=LOAD Jan 13 23:45:33.269132 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 23:45:33.326558 systemd-udevd[1565]: Using default interface naming scheme 'v257'. 
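
The two positive trust anchors that systemd-resolved prints above are root-zone DS records in the usual "key-tag algorithm digest-type digest" layout. A small, illustrative parser for that layout; the algorithm and digest-type names are standard DNSSEC registry values, not anything taken from the log itself:

    # Split a DS record as logged by systemd-resolved into its DNSSEC fields.
    ALGORITHMS   = {8: "RSA/SHA-256", 13: "ECDSA P-256/SHA-256"}   # common IANA values
    DIGEST_TYPES = {1: "SHA-1", 2: "SHA-256"}

    def parse_ds(record: str) -> dict:
        owner, _cls, _type, key_tag, alg, digest_type, digest = record.split(maxsplit=6)
        return {
            "owner": owner,
            "key_tag": int(key_tag),
            "algorithm": ALGORITHMS.get(int(alg), alg),
            "digest_type": DIGEST_TYPES.get(int(digest_type), digest_type),
            "digest": digest,
        }

    print(parse_ds(". IN DS 20326 8 2 "
                   "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"))
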
Jan 13 23:45:33.392122 kernel: loop5: detected capacity change from 0 to 45344 Jan 13 23:45:33.421118 kernel: loop6: detected capacity change from 0 to 211168 Jan 13 23:45:33.460113 kernel: loop7: detected capacity change from 0 to 61504 Jan 13 23:45:33.492177 kernel: loop1: detected capacity change from 0 to 100192 Jan 13 23:45:33.499271 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 23:45:33.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:33.512000 audit: BPF prog-id=30 op=LOAD Jan 13 23:45:33.516342 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 23:45:33.518518 (sd-merge)[1567]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Jan 13 23:45:33.529639 (sd-merge)[1567]: Merged extensions into '/usr'. Jan 13 23:45:33.543475 systemd[1]: Reload requested from client PID 1487 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 23:45:33.543505 systemd[1]: Reloading... Jan 13 23:45:33.669700 (udev-worker)[1577]: Network interface NamePolicy= disabled on kernel command line. Jan 13 23:45:33.848182 zram_generator::config[1645]: No configuration found. Jan 13 23:45:34.418664 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 23:45:34.419858 systemd[1]: Reloading finished in 875 ms. Jan 13 23:45:34.451840 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 23:45:34.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:34.533536 systemd[1]: Starting ensure-sysext.service... Jan 13 23:45:34.542190 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 23:45:34.560351 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
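
The "detected capacity change" values for the loop devices backing the merged sysext images above appear to be reported in 512-byte sectors (an assumption based on how the loop driver sizes block devices, not something stated in the log), which would put the image sizes roughly here:

    # Convert the loop-device capacities logged above from 512-byte sectors to MiB.
    SECTOR = 512
    capacities = {"loop5": 45344, "loop6": 211168, "loop7": 61504, "loop1": 100192}

    for dev, sectors in capacities.items():
        mib = sectors * SECTOR / 2**20
        print(f"{dev}: {sectors} sectors ~= {mib:.1f} MiB")
    # loop5 ~= 22.1 MiB, loop6 ~= 103.1 MiB, loop7 ~= 30.0 MiB, loop1 ~= 48.9 MiB
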
Jan 13 23:45:34.572000 audit: BPF prog-id=31 op=LOAD Jan 13 23:45:34.572000 audit: BPF prog-id=21 op=UNLOAD Jan 13 23:45:34.574000 audit: BPF prog-id=32 op=LOAD Jan 13 23:45:34.574000 audit: BPF prog-id=18 op=UNLOAD Jan 13 23:45:34.575000 audit: BPF prog-id=33 op=LOAD Jan 13 23:45:34.575000 audit: BPF prog-id=34 op=LOAD Jan 13 23:45:34.575000 audit: BPF prog-id=19 op=UNLOAD Jan 13 23:45:34.575000 audit: BPF prog-id=20 op=UNLOAD Jan 13 23:45:34.577000 audit: BPF prog-id=35 op=LOAD Jan 13 23:45:34.577000 audit: BPF prog-id=36 op=LOAD Jan 13 23:45:34.577000 audit: BPF prog-id=28 op=UNLOAD Jan 13 23:45:34.577000 audit: BPF prog-id=29 op=UNLOAD Jan 13 23:45:34.579000 audit: BPF prog-id=37 op=LOAD Jan 13 23:45:34.579000 audit: BPF prog-id=15 op=UNLOAD Jan 13 23:45:34.579000 audit: BPF prog-id=38 op=LOAD Jan 13 23:45:34.579000 audit: BPF prog-id=39 op=LOAD Jan 13 23:45:34.580000 audit: BPF prog-id=16 op=UNLOAD Jan 13 23:45:34.580000 audit: BPF prog-id=17 op=UNLOAD Jan 13 23:45:34.582000 audit: BPF prog-id=40 op=LOAD Jan 13 23:45:34.582000 audit: BPF prog-id=30 op=UNLOAD Jan 13 23:45:34.585000 audit: BPF prog-id=41 op=LOAD Jan 13 23:45:34.585000 audit: BPF prog-id=25 op=UNLOAD Jan 13 23:45:34.585000 audit: BPF prog-id=42 op=LOAD Jan 13 23:45:34.585000 audit: BPF prog-id=43 op=LOAD Jan 13 23:45:34.585000 audit: BPF prog-id=26 op=UNLOAD Jan 13 23:45:34.585000 audit: BPF prog-id=27 op=UNLOAD Jan 13 23:45:34.586000 audit: BPF prog-id=44 op=LOAD Jan 13 23:45:34.586000 audit: BPF prog-id=22 op=UNLOAD Jan 13 23:45:34.588000 audit: BPF prog-id=45 op=LOAD Jan 13 23:45:34.588000 audit: BPF prog-id=46 op=LOAD Jan 13 23:45:34.588000 audit: BPF prog-id=23 op=UNLOAD Jan 13 23:45:34.588000 audit: BPF prog-id=24 op=UNLOAD Jan 13 23:45:34.612384 systemd[1]: Reload requested from client PID 1776 ('systemctl') (unit ensure-sysext.service)... Jan 13 23:45:34.612419 systemd[1]: Reloading... Jan 13 23:45:34.635835 systemd-tmpfiles[1777]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 13 23:45:34.638029 systemd-tmpfiles[1777]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 13 23:45:34.639209 systemd-tmpfiles[1777]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 23:45:34.646720 systemd-tmpfiles[1777]: ACLs are not supported, ignoring. Jan 13 23:45:34.651212 systemd-tmpfiles[1777]: ACLs are not supported, ignoring. Jan 13 23:45:34.684258 systemd-tmpfiles[1777]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 23:45:34.685151 systemd-tmpfiles[1777]: Skipping /boot Jan 13 23:45:34.712034 systemd-tmpfiles[1777]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 23:45:34.712246 systemd-tmpfiles[1777]: Skipping /boot Jan 13 23:45:34.818186 zram_generator::config[1818]: No configuration found. Jan 13 23:45:34.879885 systemd-networkd[1574]: lo: Link UP Jan 13 23:45:34.880429 systemd-networkd[1574]: lo: Gained carrier Jan 13 23:45:34.883641 systemd-networkd[1574]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 13 23:45:34.883807 systemd-networkd[1574]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
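
The long run of "audit: BPF prog-id=N op=LOAD/UNLOAD" records above (and the similar run after the second reload further down) is systemd re-attaching its per-unit BPF programs as units restart across the daemon reloads. A throwaway tally like the sketch below, fed raw journal text, makes the churn easier to read; the regular expression is an assumption about the exact record format shown here, not a general audit parser:

    import re
    from collections import Counter

    # Count BPF LOAD/UNLOAD audit records in a chunk of journal text like the one above.
    BPF_RE = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

    def tally(journal_text: str) -> Counter:
        return Counter(op for _prog_id, op in BPF_RE.findall(journal_text))

    sample = (
        "Jan 13 23:45:34.572000 audit: BPF prog-id=31 op=LOAD "
        "Jan 13 23:45:34.572000 audit: BPF prog-id=21 op=UNLOAD"
    )
    print(tally(sample))   # Counter({'LOAD': 1, 'UNLOAD': 1})
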
Jan 13 23:45:34.888724 systemd-networkd[1574]: eth0: Link UP Jan 13 23:45:34.889311 systemd-networkd[1574]: eth0: Gained carrier Jan 13 23:45:34.889441 systemd-networkd[1574]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 13 23:45:34.902225 systemd-networkd[1574]: eth0: DHCPv4 address 172.31.28.147/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 23:45:35.289916 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 23:45:35.293731 systemd[1]: Reloading finished in 680 ms. Jan 13 23:45:35.314939 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 23:45:35.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:35.320000 audit: BPF prog-id=47 op=LOAD Jan 13 23:45:35.320000 audit: BPF prog-id=48 op=LOAD Jan 13 23:45:35.320000 audit: BPF prog-id=35 op=UNLOAD Jan 13 23:45:35.320000 audit: BPF prog-id=36 op=UNLOAD Jan 13 23:45:35.321000 audit: BPF prog-id=49 op=LOAD Jan 13 23:45:35.321000 audit: BPF prog-id=44 op=UNLOAD Jan 13 23:45:35.322000 audit: BPF prog-id=50 op=LOAD Jan 13 23:45:35.322000 audit: BPF prog-id=51 op=LOAD Jan 13 23:45:35.322000 audit: BPF prog-id=45 op=UNLOAD Jan 13 23:45:35.322000 audit: BPF prog-id=46 op=UNLOAD Jan 13 23:45:35.324000 audit: BPF prog-id=52 op=LOAD Jan 13 23:45:35.324000 audit: BPF prog-id=41 op=UNLOAD Jan 13 23:45:35.324000 audit: BPF prog-id=53 op=LOAD Jan 13 23:45:35.324000 audit: BPF prog-id=54 op=LOAD Jan 13 23:45:35.324000 audit: BPF prog-id=42 op=UNLOAD Jan 13 23:45:35.324000 audit: BPF prog-id=43 op=UNLOAD Jan 13 23:45:35.326000 audit: BPF prog-id=55 op=LOAD Jan 13 23:45:35.326000 audit: BPF prog-id=32 op=UNLOAD Jan 13 23:45:35.326000 audit: BPF prog-id=56 op=LOAD Jan 13 23:45:35.326000 audit: BPF prog-id=57 op=LOAD Jan 13 23:45:35.326000 audit: BPF prog-id=33 op=UNLOAD Jan 13 23:45:35.326000 audit: BPF prog-id=34 op=UNLOAD Jan 13 23:45:35.328000 audit: BPF prog-id=58 op=LOAD Jan 13 23:45:35.328000 audit: BPF prog-id=31 op=UNLOAD Jan 13 23:45:35.342000 audit: BPF prog-id=59 op=LOAD Jan 13 23:45:35.342000 audit: BPF prog-id=37 op=UNLOAD Jan 13 23:45:35.342000 audit: BPF prog-id=60 op=LOAD Jan 13 23:45:35.342000 audit: BPF prog-id=61 op=LOAD Jan 13 23:45:35.342000 audit: BPF prog-id=38 op=UNLOAD Jan 13 23:45:35.342000 audit: BPF prog-id=39 op=UNLOAD Jan 13 23:45:35.344000 audit: BPF prog-id=62 op=LOAD Jan 13 23:45:35.344000 audit: BPF prog-id=40 op=UNLOAD Jan 13 23:45:35.350890 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 23:45:35.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:35.357442 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:45:35.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:35.406816 systemd[1]: Reached target network.target - Network. Jan 13 23:45:35.413486 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
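
A quick way to sanity-check the DHCPv4 lease that systemd-networkd logs above; the address, prefix length, and gateway are taken from the log, and the check itself is only illustrative:

    import ipaddress

    # Values from the systemd-networkd DHCPv4 message above.
    iface   = ipaddress.ip_interface("172.31.28.147/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)                # 172.31.16.0/20
    print(gateway in iface.network)     # True: the gateway is on-link
    print(iface.network.num_addresses)  # 4096 addresses in the /20
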
Jan 13 23:45:35.419777 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 23:45:35.426336 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 23:45:35.429714 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 23:45:35.435708 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 23:45:35.448423 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 23:45:35.451229 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 23:45:35.451651 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 13 23:45:35.455615 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 23:45:35.462754 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 23:45:35.467687 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 13 23:45:35.473707 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 23:45:35.480102 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 13 23:45:35.487540 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 23:45:35.499659 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 23:45:35.509199 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 23:45:35.511825 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 23:45:35.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:35.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:35.520772 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 23:45:35.522287 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 23:45:35.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:35.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:35.526120 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 23:45:35.526631 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 13 23:45:35.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:35.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:35.544000 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 23:45:35.552524 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 23:45:35.558172 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 23:45:35.567834 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 23:45:35.570313 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 23:45:35.570656 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 13 23:45:35.570894 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 13 23:45:35.585412 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 23:45:35.601696 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 23:45:35.604400 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 23:45:35.604755 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 13 23:45:35.604981 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 13 23:45:35.605360 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 23:45:35.615221 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 23:45:35.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:35.630987 systemd[1]: Finished ensure-sysext.service. Jan 13 23:45:35.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:35.635000 audit[1880]: SYSTEM_BOOT pid=1880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 13 23:45:35.647425 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
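
Unit names such as systemd-fsck@dev-disk-by\x2dlabel-OEM.service and etc-machine\x2did.mount above use systemd's path escaping, where "/" becomes "-" and characters like "-" are hex-escaped. A rough illustration of that mapping; this is a simplified sketch of what systemd-escape --path does, covering only the characters that actually appear in this log, not the full rules:

    def escape_path(path: str) -> str:
        """Simplified systemd path escaping: enough for the names seen in this log."""
        trimmed = path.strip("/")
        out = []
        for ch in trimmed:
            if ch.isalnum() or ch in "_.":
                out.append(ch)
            elif ch == "/":
                out.append("-")
            else:
                out.append(f"\\x{ord(ch):02x}")   # e.g. '-' -> \x2d
        return "".join(out)

    print(escape_path("/dev/disk/by-label/OEM"))   # dev-disk-by\x2dlabel-OEM
    print(escape_path("/etc/machine-id"))          # etc-machine\x2did
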
Jan 13 23:45:35.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:35.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:35.679220 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 23:45:35.687999 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 23:45:35.688488 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 23:45:35.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:35.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:35.694724 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 23:45:35.699025 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 23:45:35.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:35.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:35.703171 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 23:45:35.711341 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 23:45:35.718566 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 23:45:35.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:35.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:35.722214 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 23:45:35.722661 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 23:45:35.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:35.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:45:35.728982 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 13 23:45:35.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:35.733601 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 23:45:35.766000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 13 23:45:35.766000 audit[1918]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcc86f160 a2=420 a3=0 items=0 ppid=1870 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:35.766000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 13 23:45:35.768277 augenrules[1918]: No rules Jan 13 23:45:35.770247 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 23:45:35.770795 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 23:45:35.809571 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 23:45:35.812959 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 23:45:36.538241 systemd-networkd[1574]: eth0: Gained IPv6LL Jan 13 23:45:36.544245 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 23:45:36.547947 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 23:45:38.471586 ldconfig[1875]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 23:45:38.480183 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 23:45:38.485883 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 23:45:38.516155 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 23:45:38.519357 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 23:45:38.522124 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 23:45:38.525086 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 23:45:38.528570 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 23:45:38.531424 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 23:45:38.534289 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 13 23:45:38.537249 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 13 23:45:38.539970 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 23:45:38.542904 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
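
The PROCTITLE record in the audit-rules block above carries the command line as NUL-separated hex. Decoding it shows the auditctl invocation that loaded /etc/audit/audit.rules; the hex string is copied verbatim from the record, and the decoding itself is standard:

    # Decode the hex-encoded proctitle field from the audit record above.
    proctitle = ("2F7362696E2F617564697463746C002D52002F6574632F61756469742F6175"
                 "6469742E72756C6573")

    argv = bytes.fromhex(proctitle).split(b"\x00")
    print([a.decode() for a in argv])
    # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
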
Jan 13 23:45:38.542959 systemd[1]: Reached target paths.target - Path Units. Jan 13 23:45:38.545252 systemd[1]: Reached target timers.target - Timer Units. Jan 13 23:45:38.549955 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 23:45:38.555285 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 23:45:38.561630 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 13 23:45:38.565141 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 13 23:45:38.568173 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 13 23:45:38.578428 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 23:45:38.581386 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 13 23:45:38.585464 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 23:45:38.588307 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 23:45:38.590710 systemd[1]: Reached target basic.target - Basic System. Jan 13 23:45:38.593109 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 23:45:38.593277 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 23:45:38.595224 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 23:45:38.601370 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 23:45:38.612701 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 23:45:38.619407 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 23:45:38.624990 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 23:45:38.632605 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 23:45:38.635216 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 23:45:38.640438 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:45:38.645724 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 23:45:38.657349 systemd[1]: Started ntpd.service - Network Time Service. Jan 13 23:45:38.667632 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 23:45:38.675434 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 23:45:38.683393 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 23:45:38.690450 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 23:45:38.706548 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 23:45:38.718908 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 23:45:38.721341 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 23:45:38.722213 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 23:45:38.728500 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 13 23:45:38.737428 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 23:45:38.758909 jq[1935]: false Jan 13 23:45:38.753124 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 23:45:38.778720 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 23:45:38.794276 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 23:45:38.795563 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 23:45:38.796030 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 23:45:38.825912 jq[1947]: true Jan 13 23:45:38.923523 ntpd[1939]: ntpd 4.2.8p18@1.4062-o Tue Jan 13 21:30:35 UTC 2026 (1): Starting Jan 13 23:45:38.926687 ntpd[1939]: 13 Jan 23:45:38 ntpd[1939]: ntpd 4.2.8p18@1.4062-o Tue Jan 13 21:30:35 UTC 2026 (1): Starting Jan 13 23:45:38.926687 ntpd[1939]: 13 Jan 23:45:38 ntpd[1939]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 23:45:38.926687 ntpd[1939]: 13 Jan 23:45:38 ntpd[1939]: ---------------------------------------------------- Jan 13 23:45:38.926687 ntpd[1939]: 13 Jan 23:45:38 ntpd[1939]: ntp-4 is maintained by Network Time Foundation, Jan 13 23:45:38.926687 ntpd[1939]: 13 Jan 23:45:38 ntpd[1939]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 23:45:38.926687 ntpd[1939]: 13 Jan 23:45:38 ntpd[1939]: corporation. Support and training for ntp-4 are Jan 13 23:45:38.926687 ntpd[1939]: 13 Jan 23:45:38 ntpd[1939]: available at https://www.nwtime.org/support Jan 13 23:45:38.926687 ntpd[1939]: 13 Jan 23:45:38 ntpd[1939]: ---------------------------------------------------- Jan 13 23:45:38.923635 ntpd[1939]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 23:45:38.923653 ntpd[1939]: ---------------------------------------------------- Jan 13 23:45:38.923671 ntpd[1939]: ntp-4 is maintained by Network Time Foundation, Jan 13 23:45:38.923688 ntpd[1939]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 23:45:38.923705 ntpd[1939]: corporation. 
Support and training for ntp-4 are Jan 13 23:45:38.923722 ntpd[1939]: available at https://www.nwtime.org/support Jan 13 23:45:38.923739 ntpd[1939]: ---------------------------------------------------- Jan 13 23:45:38.932817 ntpd[1939]: proto: precision = 0.096 usec (-23) Jan 13 23:45:38.944158 ntpd[1939]: 13 Jan 23:45:38 ntpd[1939]: proto: precision = 0.096 usec (-23) Jan 13 23:45:38.946176 ntpd[1939]: basedate set to 2026-01-01 Jan 13 23:45:38.948242 ntpd[1939]: 13 Jan 23:45:38 ntpd[1939]: basedate set to 2026-01-01 Jan 13 23:45:38.948242 ntpd[1939]: 13 Jan 23:45:38 ntpd[1939]: gps base set to 2026-01-04 (week 2400) Jan 13 23:45:38.948242 ntpd[1939]: 13 Jan 23:45:38 ntpd[1939]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 23:45:38.948242 ntpd[1939]: 13 Jan 23:45:38 ntpd[1939]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 23:45:38.948242 ntpd[1939]: 13 Jan 23:45:38 ntpd[1939]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 23:45:38.948242 ntpd[1939]: 13 Jan 23:45:38 ntpd[1939]: Listen normally on 3 eth0 172.31.28.147:123 Jan 13 23:45:38.948242 ntpd[1939]: 13 Jan 23:45:38 ntpd[1939]: Listen normally on 4 lo [::1]:123 Jan 13 23:45:38.948242 ntpd[1939]: 13 Jan 23:45:38 ntpd[1939]: Listen normally on 5 eth0 [fe80::470:f8ff:fe42:2acf%2]:123 Jan 13 23:45:38.948242 ntpd[1939]: 13 Jan 23:45:38 ntpd[1939]: Listening on routing socket on fd #22 for interface updates Jan 13 23:45:38.946216 ntpd[1939]: gps base set to 2026-01-04 (week 2400) Jan 13 23:45:38.946405 ntpd[1939]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 23:45:38.946452 ntpd[1939]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 23:45:38.946774 ntpd[1939]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 23:45:38.946821 ntpd[1939]: Listen normally on 3 eth0 172.31.28.147:123 Jan 13 23:45:38.946868 ntpd[1939]: Listen normally on 4 lo [::1]:123 Jan 13 23:45:38.946918 ntpd[1939]: Listen normally on 5 eth0 [fe80::470:f8ff:fe42:2acf%2]:123 Jan 13 23:45:38.946963 ntpd[1939]: Listening on routing socket on fd #22 for interface updates Jan 13 23:45:38.957202 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 23:45:38.978815 jq[1969]: true Jan 13 23:45:38.981432 extend-filesystems[1936]: Found /dev/nvme0n1p6 Jan 13 23:45:38.990614 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 23:45:38.993443 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 23:45:39.019017 ntpd[1939]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 23:45:39.019726 ntpd[1939]: 13 Jan 23:45:39 ntpd[1939]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 23:45:39.019726 ntpd[1939]: 13 Jan 23:45:39 ntpd[1939]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 23:45:39.019095 ntpd[1939]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 23:45:39.047990 tar[1964]: linux-arm64/LICENSE Jan 13 23:45:39.047990 tar[1964]: linux-arm64/helm Jan 13 23:45:39.045648 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 23:45:39.045233 dbus-daemon[1933]: [system] SELinux support is enabled Jan 13 23:45:39.057486 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 23:45:39.057558 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
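
The "(-23)" next to ntpd's measured precision above is the base-2 exponent ntpd stores internally: the measured 0.096 usec is recorded as a power of two, 2^-23 s. The exact rounding rule is an assumption here, but the arithmetic is consistent with the logged values:

    import math

    measured_s = 0.096e-6          # "proto: precision = 0.096 usec" from the log
    exponent   = round(math.log2(measured_s))

    print(exponent)                                 # -23, matching the "(-23)" in the log
    print(f"2**-23 s ~= {2**-23 * 1e6:.3f} usec")   # ~= 0.119 usec
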
Jan 13 23:45:39.061147 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 23:45:39.061186 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 23:45:39.067149 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 13 23:45:39.073150 update_engine[1946]: I20260113 23:45:39.069973 1946 main.cc:92] Flatcar Update Engine starting Jan 13 23:45:39.073597 extend-filesystems[1936]: Found /dev/nvme0n1p9 Jan 13 23:45:39.079927 update_engine[1946]: I20260113 23:45:39.079694 1946 update_check_scheduler.cc:74] Next update check in 5m48s Jan 13 23:45:39.083293 dbus-daemon[1933]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1574 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 23:45:39.085302 extend-filesystems[1936]: Checking size of /dev/nvme0n1p9 Jan 13 23:45:39.087554 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 13 23:45:39.093922 systemd[1]: Started update-engine.service - Update Engine. Jan 13 23:45:39.112121 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 13 23:45:39.151556 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 23:45:39.223605 extend-filesystems[1936]: Resized partition /dev/nvme0n1p9 Jan 13 23:45:39.228590 coreos-metadata[1932]: Jan 13 23:45:39.227 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 23:45:39.232138 coreos-metadata[1932]: Jan 13 23:45:39.230 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 13 23:45:39.236733 coreos-metadata[1932]: Jan 13 23:45:39.236 INFO Fetch successful Jan 13 23:45:39.236733 coreos-metadata[1932]: Jan 13 23:45:39.236 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 13 23:45:39.236733 coreos-metadata[1932]: Jan 13 23:45:39.236 INFO Fetch successful Jan 13 23:45:39.237015 coreos-metadata[1932]: Jan 13 23:45:39.236 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 13 23:45:39.237015 coreos-metadata[1932]: Jan 13 23:45:39.236 INFO Fetch successful Jan 13 23:45:39.237015 coreos-metadata[1932]: Jan 13 23:45:39.236 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 13 23:45:39.239585 extend-filesystems[2025]: resize2fs 1.47.3 (8-Jul-2025) Jan 13 23:45:39.242702 coreos-metadata[1932]: Jan 13 23:45:39.239 INFO Fetch successful Jan 13 23:45:39.242702 coreos-metadata[1932]: Jan 13 23:45:39.239 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 13 23:45:39.242702 coreos-metadata[1932]: Jan 13 23:45:39.239 INFO Fetch failed with 404: resource not found Jan 13 23:45:39.242702 coreos-metadata[1932]: Jan 13 23:45:39.239 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 13 23:45:39.251668 coreos-metadata[1932]: Jan 13 23:45:39.247 INFO Fetch successful Jan 13 23:45:39.251668 coreos-metadata[1932]: Jan 13 23:45:39.247 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 13 23:45:39.254333 coreos-metadata[1932]: Jan 13 23:45:39.254 INFO Fetch successful Jan 13 23:45:39.254333 coreos-metadata[1932]: Jan 13 23:45:39.254 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 13 23:45:39.254529 coreos-metadata[1932]: Jan 13 23:45:39.254 INFO Fetch successful Jan 13 23:45:39.254529 coreos-metadata[1932]: Jan 13 23:45:39.254 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 13 23:45:39.254529 coreos-metadata[1932]: Jan 13 23:45:39.254 INFO Fetch successful Jan 13 23:45:39.254529 coreos-metadata[1932]: Jan 13 23:45:39.254 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 13 23:45:39.254529 coreos-metadata[1932]: Jan 13 23:45:39.254 INFO Fetch successful Jan 13 23:45:39.267121 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Jan 13 23:45:39.414212 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 23:45:39.417102 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 23:45:39.430099 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Jan 13 23:45:39.449151 bash[2022]: Updated "/home/core/.ssh/authorized_keys" Jan 13 23:45:39.450868 extend-filesystems[2025]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 13 23:45:39.450868 extend-filesystems[2025]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 13 23:45:39.450868 extend-filesystems[2025]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Jan 13 23:45:39.461977 extend-filesystems[1936]: Resized filesystem in /dev/nvme0n1p9 Jan 13 23:45:39.458015 systemd-logind[1945]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 23:45:39.458077 systemd-logind[1945]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 13 23:45:39.461481 systemd-logind[1945]: New seat seat0. Jan 13 23:45:39.464188 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 23:45:39.466527 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 23:45:39.480188 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 23:45:39.497426 systemd[1]: Starting sshkeys.service... Jan 13 23:45:39.499542 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 23:45:39.649463 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 23:45:39.662389 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 23:45:39.719747 amazon-ssm-agent[2004]: Initializing new seelog logger Jan 13 23:45:39.724382 amazon-ssm-agent[2004]: New Seelog Logger Creation Complete Jan 13 23:45:39.728292 amazon-ssm-agent[2004]: 2026/01/13 23:45:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:45:39.728292 amazon-ssm-agent[2004]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:45:39.728292 amazon-ssm-agent[2004]: 2026/01/13 23:45:39 processing appconfig overrides Jan 13 23:45:39.737096 amazon-ssm-agent[2004]: 2026/01/13 23:45:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:45:39.737096 amazon-ssm-agent[2004]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:45:39.737096 amazon-ssm-agent[2004]: 2026/01/13 23:45:39 processing appconfig overrides Jan 13 23:45:39.737096 amazon-ssm-agent[2004]: 2026/01/13 23:45:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
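
The on-line resize above grows the root filesystem from 1617920 to 2604027 blocks of 4 KiB (both block counts and the block size come from the kernel and resize2fs messages); in more familiar units:

    # Sizes implied by the EXT4 resize messages above (4 KiB blocks).
    BLOCK = 4096
    old_blocks, new_blocks = 1_617_920, 2_604_027

    to_gib = lambda blocks: blocks * BLOCK / 2**30
    print(f"before: {to_gib(old_blocks):.2f} GiB")               # ~= 6.17 GiB
    print(f"after:  {to_gib(new_blocks):.2f} GiB")               # ~= 9.93 GiB
    print(f"gained: {to_gib(new_blocks - old_blocks):.2f} GiB")  # ~= 3.76 GiB
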
Jan 13 23:45:39.737096 amazon-ssm-agent[2004]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:45:39.737096 amazon-ssm-agent[2004]: 2026/01/13 23:45:39 processing appconfig overrides Jan 13 23:45:39.745719 amazon-ssm-agent[2004]: 2026-01-13 23:45:39.7358 INFO Proxy environment variables: Jan 13 23:45:39.767174 amazon-ssm-agent[2004]: 2026/01/13 23:45:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:45:39.767174 amazon-ssm-agent[2004]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:45:39.767174 amazon-ssm-agent[2004]: 2026/01/13 23:45:39 processing appconfig overrides Jan 13 23:45:39.814903 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 23:45:39.817925 dbus-daemon[1933]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 23:45:39.822209 dbus-daemon[1933]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2007 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 23:45:39.830303 systemd[1]: Starting polkit.service - Authorization Manager... Jan 13 23:45:39.875456 amazon-ssm-agent[2004]: 2026-01-13 23:45:39.7358 INFO https_proxy: Jan 13 23:45:39.955502 locksmithd[2010]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 23:45:39.972435 amazon-ssm-agent[2004]: 2026-01-13 23:45:39.7358 INFO http_proxy: Jan 13 23:45:40.034369 coreos-metadata[2064]: Jan 13 23:45:40.033 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 23:45:40.039403 coreos-metadata[2064]: Jan 13 23:45:40.037 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 13 23:45:40.044751 coreos-metadata[2064]: Jan 13 23:45:40.043 INFO Fetch successful Jan 13 23:45:40.044751 coreos-metadata[2064]: Jan 13 23:45:40.043 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 23:45:40.046081 coreos-metadata[2064]: Jan 13 23:45:40.045 INFO Fetch successful Jan 13 23:45:40.053992 unknown[2064]: wrote ssh authorized keys file for user: core Jan 13 23:45:40.084160 amazon-ssm-agent[2004]: 2026-01-13 23:45:39.7358 INFO no_proxy: Jan 13 23:45:40.132762 update-ssh-keys[2146]: Updated "/home/core/.ssh/authorized_keys" Jan 13 23:45:40.136119 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 23:45:40.147807 systemd[1]: Finished sshkeys.service. 
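
The coreos-metadata fetches above (a PUT to /latest/api/token followed by GETs for instance metadata and the SSH public key) follow the usual EC2 IMDSv2 token flow. A minimal sketch of the same flow, assuming it runs on an instance where 169.254.169.254 is reachable; the paths mirror the ones in the log, and the header names are the standard IMDSv2 ones:

    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_get(path: str) -> str:
        # Step 1: obtain a session token (IMDSv2).
        tok_req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
        )
        token = urllib.request.urlopen(tok_req, timeout=2).read().decode()

        # Step 2: use the token to read a metadata path, as coreos-metadata does above.
        req = urllib.request.Request(
            f"{IMDS}{path}", headers={"X-aws-ec2-metadata-token": token}
        )
        return urllib.request.urlopen(req, timeout=2).read().decode()

    if __name__ == "__main__":
        print(imds_get("/2021-01-03/meta-data/instance-id"))
        print(imds_get("/2021-01-03/meta-data/public-keys/0/openssh-key"))
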
Jan 13 23:45:40.188454 amazon-ssm-agent[2004]: 2026-01-13 23:45:39.7361 INFO Checking if agent identity type OnPrem can be assumed Jan 13 23:45:40.205912 containerd[1984]: time="2026-01-13T23:45:40Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 13 23:45:40.213325 containerd[1984]: time="2026-01-13T23:45:40.213263755Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 13 23:45:40.289760 amazon-ssm-agent[2004]: 2026-01-13 23:45:39.7362 INFO Checking if agent identity type EC2 can be assumed Jan 13 23:45:40.296098 containerd[1984]: time="2026-01-13T23:45:40.295764860Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.976µs" Jan 13 23:45:40.296098 containerd[1984]: time="2026-01-13T23:45:40.295825700Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 13 23:45:40.296098 containerd[1984]: time="2026-01-13T23:45:40.295900340Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 13 23:45:40.296098 containerd[1984]: time="2026-01-13T23:45:40.295930388Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 13 23:45:40.316794 containerd[1984]: time="2026-01-13T23:45:40.313453628Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 13 23:45:40.318143 containerd[1984]: time="2026-01-13T23:45:40.316166204Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 13 23:45:40.319956 containerd[1984]: time="2026-01-13T23:45:40.319870808Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 13 23:45:40.319956 containerd[1984]: time="2026-01-13T23:45:40.319947752Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 13 23:45:40.326011 containerd[1984]: time="2026-01-13T23:45:40.321642008Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 13 23:45:40.326011 containerd[1984]: time="2026-01-13T23:45:40.323139764Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 13 23:45:40.326011 containerd[1984]: time="2026-01-13T23:45:40.323230796Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 13 23:45:40.326011 containerd[1984]: time="2026-01-13T23:45:40.323283548Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 13 23:45:40.326011 containerd[1984]: time="2026-01-13T23:45:40.323955788Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 13 23:45:40.326011 containerd[1984]: time="2026-01-13T23:45:40.324015020Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 13 23:45:40.326011 containerd[1984]: time="2026-01-13T23:45:40.324883652Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 13 23:45:40.326011 containerd[1984]: time="2026-01-13T23:45:40.325576964Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 13 23:45:40.326011 containerd[1984]: time="2026-01-13T23:45:40.325785440Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 13 23:45:40.326011 containerd[1984]: time="2026-01-13T23:45:40.325841888Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 13 23:45:40.326011 containerd[1984]: time="2026-01-13T23:45:40.325939592Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 13 23:45:40.327513 containerd[1984]: time="2026-01-13T23:45:40.326696732Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 13 23:45:40.327513 containerd[1984]: time="2026-01-13T23:45:40.326900264Z" level=info msg="metadata content store policy set" policy=shared Jan 13 23:45:40.337714 containerd[1984]: time="2026-01-13T23:45:40.334734368Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 13 23:45:40.337714 containerd[1984]: time="2026-01-13T23:45:40.334842164Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 13 23:45:40.337714 containerd[1984]: time="2026-01-13T23:45:40.334985972Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 13 23:45:40.337714 containerd[1984]: time="2026-01-13T23:45:40.335013032Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 13 23:45:40.337714 containerd[1984]: time="2026-01-13T23:45:40.335044472Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 13 23:45:40.337714 containerd[1984]: time="2026-01-13T23:45:40.335112308Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 13 23:45:40.337714 containerd[1984]: time="2026-01-13T23:45:40.335143580Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 13 23:45:40.337714 containerd[1984]: time="2026-01-13T23:45:40.335183744Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 13 23:45:40.337714 containerd[1984]: time="2026-01-13T23:45:40.335226128Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 13 23:45:40.337714 containerd[1984]: time="2026-01-13T23:45:40.335257880Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 13 23:45:40.337714 containerd[1984]: time="2026-01-13T23:45:40.335286308Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 13 23:45:40.337714 containerd[1984]: 
time="2026-01-13T23:45:40.335336984Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 13 23:45:40.337714 containerd[1984]: time="2026-01-13T23:45:40.335365628Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 13 23:45:40.337714 containerd[1984]: time="2026-01-13T23:45:40.335394656Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 13 23:45:40.338366 containerd[1984]: time="2026-01-13T23:45:40.335620388Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 13 23:45:40.338366 containerd[1984]: time="2026-01-13T23:45:40.335659508Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 13 23:45:40.338366 containerd[1984]: time="2026-01-13T23:45:40.335699192Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 13 23:45:40.338366 containerd[1984]: time="2026-01-13T23:45:40.335746928Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 13 23:45:40.338366 containerd[1984]: time="2026-01-13T23:45:40.335778560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 13 23:45:40.338366 containerd[1984]: time="2026-01-13T23:45:40.335804780Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 13 23:45:40.338366 containerd[1984]: time="2026-01-13T23:45:40.335835416Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 13 23:45:40.338366 containerd[1984]: time="2026-01-13T23:45:40.335874224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 13 23:45:40.338366 containerd[1984]: time="2026-01-13T23:45:40.335902772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 13 23:45:40.338366 containerd[1984]: time="2026-01-13T23:45:40.335929160Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 13 23:45:40.338366 containerd[1984]: time="2026-01-13T23:45:40.335954336Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 13 23:45:40.338366 containerd[1984]: time="2026-01-13T23:45:40.336012188Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 13 23:45:40.338366 containerd[1984]: time="2026-01-13T23:45:40.336132152Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 13 23:45:40.338366 containerd[1984]: time="2026-01-13T23:45:40.336174044Z" level=info msg="Start snapshots syncer" Jan 13 23:45:40.338366 containerd[1984]: time="2026-01-13T23:45:40.336221624Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 13 23:45:40.338945 containerd[1984]: time="2026-01-13T23:45:40.336687860Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 13 23:45:40.338945 containerd[1984]: time="2026-01-13T23:45:40.336772652Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 13 23:45:40.339200 containerd[1984]: time="2026-01-13T23:45:40.336862700Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 13 23:45:40.339200 containerd[1984]: time="2026-01-13T23:45:40.337055576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 13 23:45:40.339200 containerd[1984]: time="2026-01-13T23:45:40.337138040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 13 23:45:40.339200 containerd[1984]: time="2026-01-13T23:45:40.337166096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 13 23:45:40.339200 containerd[1984]: time="2026-01-13T23:45:40.337196120Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 13 23:45:40.339200 containerd[1984]: time="2026-01-13T23:45:40.337225712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 13 23:45:40.339200 containerd[1984]: time="2026-01-13T23:45:40.337252472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 13 23:45:40.339200 containerd[1984]: time="2026-01-13T23:45:40.337279952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 13 23:45:40.339200 containerd[1984]: time="2026-01-13T23:45:40.337306232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 13 
23:45:40.339200 containerd[1984]: time="2026-01-13T23:45:40.337332548Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 13 23:45:40.339200 containerd[1984]: time="2026-01-13T23:45:40.337400072Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 13 23:45:40.339200 containerd[1984]: time="2026-01-13T23:45:40.337430732Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 13 23:45:40.339200 containerd[1984]: time="2026-01-13T23:45:40.337453448Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 13 23:45:40.339735 containerd[1984]: time="2026-01-13T23:45:40.337479416Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 13 23:45:40.339735 containerd[1984]: time="2026-01-13T23:45:40.337500908Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 13 23:45:40.339735 containerd[1984]: time="2026-01-13T23:45:40.337525316Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 13 23:45:40.339735 containerd[1984]: time="2026-01-13T23:45:40.337552136Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 13 23:45:40.339735 containerd[1984]: time="2026-01-13T23:45:40.337715120Z" level=info msg="runtime interface created" Jan 13 23:45:40.339735 containerd[1984]: time="2026-01-13T23:45:40.337735352Z" level=info msg="created NRI interface" Jan 13 23:45:40.339735 containerd[1984]: time="2026-01-13T23:45:40.337756808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 13 23:45:40.339735 containerd[1984]: time="2026-01-13T23:45:40.337787768Z" level=info msg="Connect containerd service" Jan 13 23:45:40.339735 containerd[1984]: time="2026-01-13T23:45:40.337844072Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 23:45:40.348958 containerd[1984]: time="2026-01-13T23:45:40.340141664Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 23:45:40.403093 amazon-ssm-agent[2004]: 2026-01-13 23:45:40.2166 INFO Agent will take identity from EC2 Jan 13 23:45:40.501902 amazon-ssm-agent[2004]: 2026-01-13 23:45:40.2346 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 13 23:45:40.534156 polkitd[2105]: Started polkitd version 126 Jan 13 23:45:40.572395 polkitd[2105]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 23:45:40.573465 polkitd[2105]: Loading rules from directory /run/polkit-1/rules.d Jan 13 23:45:40.573633 polkitd[2105]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 13 23:45:40.575789 polkitd[2105]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 13 23:45:40.575954 polkitd[2105]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 13 23:45:40.576143 polkitd[2105]: 
Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 23:45:40.577654 polkitd[2105]: Finished loading, compiling and executing 2 rules Jan 13 23:45:40.578437 systemd[1]: Started polkit.service - Authorization Manager. Jan 13 23:45:40.582765 dbus-daemon[1933]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 23:45:40.587080 polkitd[2105]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 23:45:40.600274 amazon-ssm-agent[2004]: 2026-01-13 23:45:40.2346 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 13 23:45:40.654256 systemd-resolved[1539]: System hostname changed to 'ip-172-31-28-147'. Jan 13 23:45:40.654749 systemd-hostnamed[2007]: Hostname set to <ip-172-31-28-147> (transient) Jan 13 23:45:40.704080 amazon-ssm-agent[2004]: 2026-01-13 23:45:40.2346 INFO [amazon-ssm-agent] Starting Core Agent Jan 13 23:45:40.762194 containerd[1984]: time="2026-01-13T23:45:40.762104206Z" level=info msg="Start subscribing containerd event" Jan 13 23:45:40.762441 containerd[1984]: time="2026-01-13T23:45:40.762399934Z" level=info msg="Start recovering state" Jan 13 23:45:40.762722 containerd[1984]: time="2026-01-13T23:45:40.762680086Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 23:45:40.762801 containerd[1984]: time="2026-01-13T23:45:40.762783754Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 23:45:40.762927 containerd[1984]: time="2026-01-13T23:45:40.762902434Z" level=info msg="Start event monitor" Jan 13 23:45:40.763044 containerd[1984]: time="2026-01-13T23:45:40.763021474Z" level=info msg="Start cni network conf syncer for default" Jan 13 23:45:40.763230 containerd[1984]: time="2026-01-13T23:45:40.763207510Z" level=info msg="Start streaming server" Jan 13 23:45:40.763354 containerd[1984]: time="2026-01-13T23:45:40.763330714Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 13 23:45:40.763450 containerd[1984]: time="2026-01-13T23:45:40.763427218Z" level=info msg="runtime interface starting up..." Jan 13 23:45:40.763587 containerd[1984]: time="2026-01-13T23:45:40.763563130Z" level=info msg="starting plugins..." Jan 13 23:45:40.763713 containerd[1984]: time="2026-01-13T23:45:40.763689898Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 13 23:45:40.764115 containerd[1984]: time="2026-01-13T23:45:40.764088862Z" level=info msg="containerd successfully booted in 0.561269s" Jan 13 23:45:40.764325 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 23:45:40.787089 amazon-ssm-agent[2004]: 2026/01/13 23:45:40 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:45:40.787089 amazon-ssm-agent[2004]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:45:40.788498 amazon-ssm-agent[2004]: 2026/01/13 23:45:40 processing appconfig overrides Jan 13 23:45:40.803682 amazon-ssm-agent[2004]: 2026-01-13 23:45:40.2346 INFO [amazon-ssm-agent] Registrar detected.
Attempting registration Jan 13 23:45:40.827162 amazon-ssm-agent[2004]: 2026-01-13 23:45:40.2346 INFO [Registrar] Starting registrar module Jan 13 23:45:40.827162 amazon-ssm-agent[2004]: 2026-01-13 23:45:40.2582 INFO [EC2Identity] Checking disk for registration info Jan 13 23:45:40.827162 amazon-ssm-agent[2004]: 2026-01-13 23:45:40.2583 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 13 23:45:40.827162 amazon-ssm-agent[2004]: 2026-01-13 23:45:40.2583 INFO [EC2Identity] Generating registration keypair Jan 13 23:45:40.827162 amazon-ssm-agent[2004]: 2026-01-13 23:45:40.7411 INFO [EC2Identity] Checking write access before registering Jan 13 23:45:40.827162 amazon-ssm-agent[2004]: 2026-01-13 23:45:40.7419 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 13 23:45:40.827162 amazon-ssm-agent[2004]: 2026-01-13 23:45:40.7867 INFO [EC2Identity] EC2 registration was successful. Jan 13 23:45:40.827162 amazon-ssm-agent[2004]: 2026-01-13 23:45:40.7868 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jan 13 23:45:40.827162 amazon-ssm-agent[2004]: 2026-01-13 23:45:40.7869 INFO [CredentialRefresher] credentialRefresher has started Jan 13 23:45:40.827162 amazon-ssm-agent[2004]: 2026-01-13 23:45:40.7869 INFO [CredentialRefresher] Starting credentials refresher loop Jan 13 23:45:40.828274 amazon-ssm-agent[2004]: 2026-01-13 23:45:40.8258 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 13 23:45:40.828274 amazon-ssm-agent[2004]: 2026-01-13 23:45:40.8264 INFO [CredentialRefresher] Credentials ready Jan 13 23:45:40.902190 amazon-ssm-agent[2004]: 2026-01-13 23:45:40.8280 INFO [CredentialRefresher] Next credential rotation will be in 29.999966032 minutes Jan 13 23:45:41.066431 tar[1964]: linux-arm64/README.md Jan 13 23:45:41.091187 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 23:45:41.128280 sshd_keygen[1990]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 23:45:41.169199 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 23:45:41.176497 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 23:45:41.202509 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 23:45:41.204144 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 23:45:41.209921 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 23:45:41.242172 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 23:45:41.250636 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 23:45:41.259403 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 23:45:41.262350 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 23:45:41.857437 amazon-ssm-agent[2004]: 2026-01-13 23:45:41.8573 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 13 23:45:41.958351 amazon-ssm-agent[2004]: 2026-01-13 23:45:41.8613 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2212) started Jan 13 23:45:42.059252 amazon-ssm-agent[2004]: 2026-01-13 23:45:41.8615 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 13 23:45:43.992597 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
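A note on the containerd warning logged above at 23:45:40.340 ("failed to load cni during init ... no network config found in /etc/cni/net.d"): this is the expected state of a node on which no CNI plugin has installed a network config yet, and the "Start cni network conf syncer for default" entry at 23:45:40.763 is the component that picks a config up once one appears. A minimal Go sketch of the same kind of check, assuming only the directory path quoted in the warning (this is illustrative, not containerd code):

// cnicheck.go: reports whether the CNI conf directory named in the containerd
// warning contains any network config files.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/cni/net.d" // path taken from the containerd warning above
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", confDir, err)
		return
	}
	var found []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		fmt.Println("no CNI network config found; the CRI plugin will keep reporting the pod network as not ready")
		return
	}
	fmt.Println("CNI configs:", found)
}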
Jan 13 23:45:43.996858 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 23:45:44.001629 systemd[1]: Startup finished in 4.102s (kernel) + 12.140s (initrd) + 15.059s (userspace) = 31.302s. Jan 13 23:45:44.012816 (kubelet)[2228]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 23:45:45.585937 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 23:45:45.590602 systemd[1]: Started sshd@0-172.31.28.147:22-68.220.241.50:57410.service - OpenSSH per-connection server daemon (68.220.241.50:57410). Jan 13 23:45:46.227363 systemd-resolved[1539]: Clock change detected. Flushing caches. Jan 13 23:45:46.618764 kubelet[2228]: E0113 23:45:46.618606 2228 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 23:45:46.623485 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 23:45:46.623823 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 23:45:46.625740 systemd[1]: kubelet.service: Consumed 1.487s CPU time, 260.6M memory peak. Jan 13 23:45:46.658595 sshd[2238]: Accepted publickey for core from 68.220.241.50 port 57410 ssh2: RSA SHA256:vOY8WGypEHDe1ucQj1E1thVL6OXIE3/83o6052QrcUg Jan 13 23:45:46.662781 sshd-session[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:45:46.677066 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 23:45:46.679538 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 23:45:46.692329 systemd-logind[1945]: New session 1 of user core. Jan 13 23:45:46.716390 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 23:45:46.722639 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 23:45:46.748722 (systemd)[2245]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:45:46.754001 systemd-logind[1945]: New session 2 of user core. Jan 13 23:45:47.052730 systemd[2245]: Queued start job for default target default.target. Jan 13 23:45:47.063166 systemd[2245]: Created slice app.slice - User Application Slice. Jan 13 23:45:47.063239 systemd[2245]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 13 23:45:47.063272 systemd[2245]: Reached target paths.target - Paths. Jan 13 23:45:47.063372 systemd[2245]: Reached target timers.target - Timers. Jan 13 23:45:47.067693 systemd[2245]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 23:45:47.069451 systemd[2245]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 13 23:45:47.099842 systemd[2245]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 23:45:47.100056 systemd[2245]: Reached target sockets.target - Sockets. Jan 13 23:45:47.104068 systemd[2245]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 13 23:45:47.104335 systemd[2245]: Reached target basic.target - Basic System. Jan 13 23:45:47.104471 systemd[2245]: Reached target default.target - Main User Target. Jan 13 23:45:47.104596 systemd[2245]: Startup finished in 339ms. 
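The kubelet failure above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") is the normal state of a freshly provisioned node: the kubeadm-style drop-in implied by the KUBELET_KUBEADM_ARGS variable expects kubeadm init or kubeadm join to write that file, and the unit will keep failing until it exists. A small Go sketch of a hypothetical bootstrap check that waits for the file (only the path is taken from the log; the helper is not part of kubelet or kubeadm):

// waitkubeletconfig.go: hypothetical helper; polls for the kubelet config file
// named in the error above and exits non-zero if it never appears.
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml" // path from the kubelet error in the log
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			fmt.Println("kubelet config present; the node has been joined to a cluster")
			return
		}
		time.Sleep(5 * time.Second) // poll interval
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for", path)
	os.Exit(1)
}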
Jan 13 23:45:47.105116 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 23:45:47.120206 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 23:45:47.391255 systemd[1]: Started sshd@1-172.31.28.147:22-68.220.241.50:57426.service - OpenSSH per-connection server daemon (68.220.241.50:57426). Jan 13 23:45:47.860304 sshd[2259]: Accepted publickey for core from 68.220.241.50 port 57426 ssh2: RSA SHA256:vOY8WGypEHDe1ucQj1E1thVL6OXIE3/83o6052QrcUg Jan 13 23:45:47.863207 sshd-session[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:45:47.872526 systemd-logind[1945]: New session 3 of user core. Jan 13 23:45:47.882824 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 23:45:48.101815 sshd[2263]: Connection closed by 68.220.241.50 port 57426 Jan 13 23:45:48.102757 sshd-session[2259]: pam_unix(sshd:session): session closed for user core Jan 13 23:45:48.113308 systemd[1]: sshd@1-172.31.28.147:22-68.220.241.50:57426.service: Deactivated successfully. Jan 13 23:45:48.117468 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 23:45:48.121377 systemd-logind[1945]: Session 3 logged out. Waiting for processes to exit. Jan 13 23:45:48.123710 systemd-logind[1945]: Removed session 3. Jan 13 23:45:48.192336 systemd[1]: Started sshd@2-172.31.28.147:22-68.220.241.50:57438.service - OpenSSH per-connection server daemon (68.220.241.50:57438). Jan 13 23:45:48.661484 sshd[2269]: Accepted publickey for core from 68.220.241.50 port 57438 ssh2: RSA SHA256:vOY8WGypEHDe1ucQj1E1thVL6OXIE3/83o6052QrcUg Jan 13 23:45:48.664011 sshd-session[2269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:45:48.673608 systemd-logind[1945]: New session 4 of user core. Jan 13 23:45:48.683867 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 23:45:48.891564 sshd[2273]: Connection closed by 68.220.241.50 port 57438 Jan 13 23:45:48.891565 sshd-session[2269]: pam_unix(sshd:session): session closed for user core Jan 13 23:45:48.898623 systemd[1]: sshd@2-172.31.28.147:22-68.220.241.50:57438.service: Deactivated successfully. Jan 13 23:45:48.902198 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 23:45:48.906992 systemd-logind[1945]: Session 4 logged out. Waiting for processes to exit. Jan 13 23:45:48.909283 systemd-logind[1945]: Removed session 4. Jan 13 23:45:48.995169 systemd[1]: Started sshd@3-172.31.28.147:22-68.220.241.50:57450.service - OpenSSH per-connection server daemon (68.220.241.50:57450). Jan 13 23:45:49.454576 sshd[2279]: Accepted publickey for core from 68.220.241.50 port 57450 ssh2: RSA SHA256:vOY8WGypEHDe1ucQj1E1thVL6OXIE3/83o6052QrcUg Jan 13 23:45:49.457235 sshd-session[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:45:49.466167 systemd-logind[1945]: New session 5 of user core. Jan 13 23:45:49.480853 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 23:45:49.694585 sshd[2283]: Connection closed by 68.220.241.50 port 57450 Jan 13 23:45:49.695790 sshd-session[2279]: pam_unix(sshd:session): session closed for user core Jan 13 23:45:49.703662 systemd[1]: sshd@3-172.31.28.147:22-68.220.241.50:57450.service: Deactivated successfully. Jan 13 23:45:49.708944 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 23:45:49.712582 systemd-logind[1945]: Session 5 logged out. Waiting for processes to exit. Jan 13 23:45:49.714660 systemd-logind[1945]: Removed session 5. 
Jan 13 23:45:49.786018 systemd[1]: Started sshd@4-172.31.28.147:22-68.220.241.50:57460.service - OpenSSH per-connection server daemon (68.220.241.50:57460). Jan 13 23:45:50.244881 sshd[2289]: Accepted publickey for core from 68.220.241.50 port 57460 ssh2: RSA SHA256:vOY8WGypEHDe1ucQj1E1thVL6OXIE3/83o6052QrcUg Jan 13 23:45:50.247369 sshd-session[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:45:50.257601 systemd-logind[1945]: New session 6 of user core. Jan 13 23:45:50.267871 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 23:45:50.548479 sudo[2294]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 23:45:50.549161 sudo[2294]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 23:45:50.560454 sudo[2294]: pam_unix(sudo:session): session closed for user root Jan 13 23:45:50.637903 sshd[2293]: Connection closed by 68.220.241.50 port 57460 Jan 13 23:45:50.639072 sshd-session[2289]: pam_unix(sshd:session): session closed for user core Jan 13 23:45:50.649713 systemd[1]: sshd@4-172.31.28.147:22-68.220.241.50:57460.service: Deactivated successfully. Jan 13 23:45:50.653924 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 23:45:50.658301 systemd-logind[1945]: Session 6 logged out. Waiting for processes to exit. Jan 13 23:45:50.660901 systemd-logind[1945]: Removed session 6. Jan 13 23:45:50.737194 systemd[1]: Started sshd@5-172.31.28.147:22-68.220.241.50:57462.service - OpenSSH per-connection server daemon (68.220.241.50:57462). Jan 13 23:45:51.213256 sshd[2301]: Accepted publickey for core from 68.220.241.50 port 57462 ssh2: RSA SHA256:vOY8WGypEHDe1ucQj1E1thVL6OXIE3/83o6052QrcUg Jan 13 23:45:51.216353 sshd-session[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:45:51.226747 systemd-logind[1945]: New session 7 of user core. Jan 13 23:45:51.237901 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 23:45:51.381265 sudo[2307]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 23:45:51.382038 sudo[2307]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 23:45:51.387046 sudo[2307]: pam_unix(sudo:session): session closed for user root Jan 13 23:45:51.400981 sudo[2306]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 23:45:51.401805 sudo[2306]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 23:45:51.420468 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
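The two sudo invocations above remove the shipped audit rule files and restart audit-rules.service; the records that follow show the reload ending with an empty rule set ("No rules"), and the PROCTITLE of the reload records the exact auditctl command that ran. PROCTITLE fields in audit records are the process command line hex-encoded with NUL-separated arguments; a short Go sketch that decodes them (the sample value is copied from the auditctl record below):

// proctitledecode.go: decodes the hex-encoded PROCTITLE field of an audit record
// back into the original command line (argv entries are separated by NUL bytes).
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	// Value copied from the PROCTITLE record below.
	const sample = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
	cmd, err := decodeProctitle(sample)
	if err != nil {
		fmt.Println("decode error:", err)
		return
	}
	fmt.Println(cmd) // prints: /sbin/auditctl -R /etc/audit/audit.rules
}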
Jan 13 23:45:51.483000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 13 23:45:51.485813 kernel: kauditd_printk_skb: 150 callbacks suppressed Jan 13 23:45:51.485877 kernel: audit: type=1305 audit(1768347951.483:245): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 13 23:45:51.485939 augenrules[2331]: No rules Jan 13 23:45:51.483000 audit[2331]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffcd6b7f0 a2=420 a3=0 items=0 ppid=2312 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:51.490449 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 23:45:51.491685 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 23:45:51.497829 kernel: audit: type=1300 audit(1768347951.483:245): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffcd6b7f0 a2=420 a3=0 items=0 ppid=2312 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:51.497930 kernel: audit: type=1327 audit(1768347951.483:245): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 13 23:45:51.483000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 13 23:45:51.500946 sudo[2306]: pam_unix(sudo:session): session closed for user root Jan 13 23:45:51.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:51.506917 kernel: audit: type=1130 audit(1768347951.489:246): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:51.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:51.512216 kernel: audit: type=1131 audit(1768347951.489:247): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:51.512319 kernel: audit: type=1106 audit(1768347951.500:248): pid=2306 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:45:51.500000 audit[2306]: USER_END pid=2306 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:45:51.500000 audit[2306]: CRED_DISP pid=2306 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 13 23:45:51.523216 kernel: audit: type=1104 audit(1768347951.500:249): pid=2306 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:45:51.582442 sshd[2305]: Connection closed by 68.220.241.50 port 57462 Jan 13 23:45:51.583436 sshd-session[2301]: pam_unix(sshd:session): session closed for user core Jan 13 23:45:51.586000 audit[2301]: USER_END pid=2301 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:45:51.595791 systemd[1]: sshd@5-172.31.28.147:22-68.220.241.50:57462.service: Deactivated successfully. Jan 13 23:45:51.601906 kernel: audit: type=1106 audit(1768347951.586:250): pid=2301 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:45:51.602047 kernel: audit: type=1104 audit(1768347951.586:251): pid=2301 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:45:51.586000 audit[2301]: CRED_DISP pid=2301 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:45:51.600405 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 23:45:51.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.28.147:22-68.220.241.50:57462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:51.608592 kernel: audit: type=1131 audit(1768347951.595:252): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.28.147:22-68.220.241.50:57462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:51.608249 systemd-logind[1945]: Session 7 logged out. Waiting for processes to exit. Jan 13 23:45:51.610896 systemd-logind[1945]: Removed session 7. Jan 13 23:45:51.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.28.147:22-68.220.241.50:57476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:51.680900 systemd[1]: Started sshd@6-172.31.28.147:22-68.220.241.50:57476.service - OpenSSH per-connection server daemon (68.220.241.50:57476). 
Jan 13 23:45:52.143000 audit[2340]: USER_ACCT pid=2340 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:45:52.145765 sshd[2340]: Accepted publickey for core from 68.220.241.50 port 57476 ssh2: RSA SHA256:vOY8WGypEHDe1ucQj1E1thVL6OXIE3/83o6052QrcUg Jan 13 23:45:52.146000 audit[2340]: CRED_ACQ pid=2340 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:45:52.146000 audit[2340]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd48935c0 a2=3 a3=0 items=0 ppid=1 pid=2340 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:52.146000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:45:52.148372 sshd-session[2340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:45:52.158757 systemd-logind[1945]: New session 8 of user core. Jan 13 23:45:52.170920 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 23:45:52.178000 audit[2340]: USER_START pid=2340 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:45:52.182000 audit[2344]: CRED_ACQ pid=2344 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:45:52.312000 audit[2345]: USER_ACCT pid=2345 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:45:52.313000 audit[2345]: CRED_REFR pid=2345 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:45:52.313000 audit[2345]: USER_START pid=2345 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:45:52.313374 sudo[2345]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 23:45:52.314152 sudo[2345]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 23:45:53.535793 systemd[1]: Starting docker.service - Docker Application Container Engine... 
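The burst of NETFILTER_CFG records further down is dockerd creating its standard IPv4 and IPv6 chains (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2, DOCKER-USER) plus the jump and masquerade rules that wire them into the filter and nat tables. Fed through the PROCTITLE decoder sketched earlier, those records come out as, for example, /usr/bin/iptables --wait -t nat -N DOCKER for the first one and /usr/bin/iptables --wait -I FORWARD -j DOCKER-FORWARD for a later one.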
Jan 13 23:45:53.552980 (dockerd)[2363]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 23:45:54.621764 dockerd[2363]: time="2026-01-13T23:45:54.621686693Z" level=info msg="Starting up" Jan 13 23:45:54.624457 dockerd[2363]: time="2026-01-13T23:45:54.624179381Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 13 23:45:54.648643 dockerd[2363]: time="2026-01-13T23:45:54.648578453Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 13 23:45:54.679798 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport654533164-merged.mount: Deactivated successfully. Jan 13 23:45:54.718669 dockerd[2363]: time="2026-01-13T23:45:54.718603434Z" level=info msg="Loading containers: start." Jan 13 23:45:54.733547 kernel: Initializing XFRM netlink socket Jan 13 23:45:54.865000 audit[2414]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2414 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:45:54.865000 audit[2414]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc478fe90 a2=0 a3=0 items=0 ppid=2363 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:54.865000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 13 23:45:54.869000 audit[2416]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2416 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:45:54.869000 audit[2416]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe9b07350 a2=0 a3=0 items=0 ppid=2363 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:54.869000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 13 23:45:54.873000 audit[2418]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2418 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:45:54.873000 audit[2418]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd97eb780 a2=0 a3=0 items=0 ppid=2363 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:54.873000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 13 23:45:54.878000 audit[2420]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2420 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:45:54.878000 audit[2420]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffee87c1c0 a2=0 a3=0 items=0 ppid=2363 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:54.878000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 13 23:45:54.882000 audit[2422]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2422 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:45:54.882000 audit[2422]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd4871d80 a2=0 a3=0 items=0 ppid=2363 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:54.882000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 13 23:45:54.887000 audit[2424]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2424 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:45:54.887000 audit[2424]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd3fb4eb0 a2=0 a3=0 items=0 ppid=2363 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:54.887000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 13 23:45:54.891000 audit[2426]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2426 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:45:54.891000 audit[2426]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffebcfa0c0 a2=0 a3=0 items=0 ppid=2363 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:54.891000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 13 23:45:54.896000 audit[2428]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:45:54.896000 audit[2428]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffd20515c0 a2=0 a3=0 items=0 ppid=2363 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:54.896000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 13 23:45:55.059000 audit[2431]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2431 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:45:55.059000 audit[2431]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=fffff9487a50 a2=0 a3=0 items=0 ppid=2363 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.059000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 13 23:45:55.064000 audit[2433]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2433 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:45:55.064000 audit[2433]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe2191320 a2=0 a3=0 items=0 ppid=2363 pid=2433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.064000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 13 23:45:55.068000 audit[2435]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2435 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:45:55.068000 audit[2435]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffdf0411d0 a2=0 a3=0 items=0 ppid=2363 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.068000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 13 23:45:55.073000 audit[2437]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2437 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:45:55.073000 audit[2437]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffea949340 a2=0 a3=0 items=0 ppid=2363 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.073000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 13 23:45:55.077000 audit[2439]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2439 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:45:55.077000 audit[2439]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffc9dfc560 a2=0 a3=0 items=0 ppid=2363 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.077000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 13 23:45:55.159000 audit[2469]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2469 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:45:55.159000 audit[2469]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe7c0fcd0 a2=0 a3=0 items=0 ppid=2363 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.159000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 13 23:45:55.164000 audit[2471]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:45:55.164000 audit[2471]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffda863ad0 a2=0 a3=0 items=0 ppid=2363 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.164000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 13 23:45:55.168000 audit[2473]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2473 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:45:55.168000 audit[2473]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdd9c0310 a2=0 a3=0 items=0 ppid=2363 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.168000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 13 23:45:55.173000 audit[2475]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:45:55.173000 audit[2475]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeafcad30 a2=0 a3=0 items=0 ppid=2363 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.173000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 13 23:45:55.176000 audit[2477]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:45:55.176000 audit[2477]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffec11ffb0 a2=0 a3=0 items=0 ppid=2363 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.176000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 13 23:45:55.181000 audit[2479]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:45:55.181000 audit[2479]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffec5f46c0 a2=0 a3=0 items=0 ppid=2363 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.181000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 13 23:45:55.185000 audit[2481]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2481 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:45:55.185000 audit[2481]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffce591db0 a2=0 a3=0 items=0 ppid=2363 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.185000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 13 23:45:55.189000 audit[2483]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:45:55.189000 audit[2483]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffe2a46680 a2=0 a3=0 items=0 ppid=2363 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.189000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 13 23:45:55.195000 audit[2485]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2485 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:45:55.195000 audit[2485]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffeb734bb0 a2=0 a3=0 items=0 ppid=2363 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.195000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 13 23:45:55.200000 audit[2487]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2487 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:45:55.200000 audit[2487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe757f580 a2=0 a3=0 items=0 ppid=2363 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.200000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 13 23:45:55.207000 audit[2489]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2489 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:45:55.207000 audit[2489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffee956c70 a2=0 a3=0 items=0 ppid=2363 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.207000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 13 23:45:55.212000 audit[2491]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2491 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:45:55.212000 audit[2491]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc3ab6950 a2=0 a3=0 items=0 ppid=2363 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.212000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 13 23:45:55.217000 audit[2493]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2493 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:45:55.217000 audit[2493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffc05c8990 a2=0 a3=0 items=0 ppid=2363 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.217000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 13 23:45:55.230000 audit[2498]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:45:55.230000 audit[2498]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe2453e10 a2=0 a3=0 items=0 ppid=2363 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.230000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 13 23:45:55.235000 audit[2500]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:45:55.235000 audit[2500]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd7e6f330 a2=0 a3=0 items=0 ppid=2363 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.235000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 13 23:45:55.239000 audit[2502]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2502 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:45:55.239000 audit[2502]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffcb656620 a2=0 a3=0 items=0 ppid=2363 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.239000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 13 23:45:55.244000 audit[2504]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2504 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:45:55.244000 audit[2504]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffa489380 a2=0 a3=0 items=0 ppid=2363 pid=2504 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.244000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 13 23:45:55.248000 audit[2506]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2506 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:45:55.248000 audit[2506]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffcff88ca0 a2=0 a3=0 items=0 ppid=2363 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.248000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 13 23:45:55.252000 audit[2508]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2508 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:45:55.252000 audit[2508]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe4f0caa0 a2=0 a3=0 items=0 ppid=2363 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.252000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 13 23:45:55.279044 (udev-worker)[2387]: Network interface NamePolicy= disabled on kernel command line. Jan 13 23:45:55.294000 audit[2514]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2514 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:45:55.294000 audit[2514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=fffff63778d0 a2=0 a3=0 items=0 ppid=2363 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.294000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 13 23:45:55.299000 audit[2516]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2516 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:45:55.299000 audit[2516]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffcdeab740 a2=0 a3=0 items=0 ppid=2363 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.299000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 13 23:45:55.319000 audit[2524]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2524 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:45:55.319000 audit[2524]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffd11a1760 a2=0 a3=0 items=0 ppid=2363 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.319000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 13 23:45:55.339000 audit[2530]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2530 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:45:55.339000 audit[2530]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffff3487bd0 a2=0 a3=0 items=0 ppid=2363 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.339000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 13 23:45:55.346000 audit[2532]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:45:55.346000 audit[2532]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=fffff0cf3070 a2=0 a3=0 items=0 ppid=2363 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.346000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 13 23:45:55.351000 audit[2534]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2534 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:45:55.351000 audit[2534]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe2722620 a2=0 a3=0 items=0 ppid=2363 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.351000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 13 23:45:55.356000 audit[2536]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2536 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:45:55.356000 audit[2536]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffe05ab5e0 a2=0 a3=0 items=0 ppid=2363 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.356000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 13 23:45:55.361000 audit[2538]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:45:55.361000 audit[2538]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=312 a0=3 a1=ffffe6b01e90 a2=0 a3=0 items=0 ppid=2363 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:45:55.361000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 13 23:45:55.363022 systemd-networkd[1574]: docker0: Link UP Jan 13 23:45:55.374724 dockerd[2363]: time="2026-01-13T23:45:55.374652197Z" level=info msg="Loading containers: done." Jan 13 23:45:55.435099 dockerd[2363]: time="2026-01-13T23:45:55.432893249Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 23:45:55.435099 dockerd[2363]: time="2026-01-13T23:45:55.433022885Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 13 23:45:55.435099 dockerd[2363]: time="2026-01-13T23:45:55.433328897Z" level=info msg="Initializing buildkit" Jan 13 23:45:55.492622 dockerd[2363]: time="2026-01-13T23:45:55.492373494Z" level=info msg="Completed buildkit initialization" Jan 13 23:45:55.508353 dockerd[2363]: time="2026-01-13T23:45:55.508259550Z" level=info msg="Daemon has completed initialization" Jan 13 23:45:55.508624 dockerd[2363]: time="2026-01-13T23:45:55.508394046Z" level=info msg="API listen on /run/docker.sock" Jan 13 23:45:55.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:55.509720 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 23:45:55.672093 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3691215249-merged.mount: Deactivated successfully. Jan 13 23:45:56.644473 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 23:45:56.648253 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:45:57.133938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:45:57.144036 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 13 23:45:57.144183 kernel: audit: type=1130 audit(1768347957.134:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:45:57.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:45:57.150083 (kubelet)[2585]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 23:45:57.240190 kubelet[2585]: E0113 23:45:57.240104 2585 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 23:45:57.248124 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 23:45:57.248675 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 23:45:57.250666 systemd[1]: kubelet.service: Consumed 349ms CPU time, 104.8M memory peak. Jan 13 23:45:57.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 13 23:45:57.256580 kernel: audit: type=1131 audit(1768347957.250:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 13 23:45:57.680447 containerd[1984]: time="2026-01-13T23:45:57.679955456Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 13 23:45:58.513032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount575168667.mount: Deactivated successfully. Jan 13 23:46:00.064878 containerd[1984]: time="2026-01-13T23:46:00.064812440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:46:00.067865 containerd[1984]: time="2026-01-13T23:46:00.067727576Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=25791094" Jan 13 23:46:00.069919 containerd[1984]: time="2026-01-13T23:46:00.069823328Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:46:00.079363 containerd[1984]: time="2026-01-13T23:46:00.078646484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:46:00.081114 containerd[1984]: time="2026-01-13T23:46:00.080773904Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 2.400749904s" Jan 13 23:46:00.081114 containerd[1984]: time="2026-01-13T23:46:00.080851544Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Jan 13 23:46:00.084233 containerd[1984]: time="2026-01-13T23:46:00.084158420Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 13 23:46:01.611432 containerd[1984]: time="2026-01-13T23:46:01.609944064Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:46:01.612250 containerd[1984]: time="2026-01-13T23:46:01.612183468Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23544927" Jan 13 23:46:01.613809 containerd[1984]: time="2026-01-13T23:46:01.613749180Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:46:01.622812 containerd[1984]: time="2026-01-13T23:46:01.622735992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:46:01.625092 containerd[1984]: time="2026-01-13T23:46:01.625020696Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.540787528s" Jan 13 23:46:01.625323 containerd[1984]: time="2026-01-13T23:46:01.625285488Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Jan 13 23:46:01.626476 containerd[1984]: time="2026-01-13T23:46:01.626308800Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 13 23:46:02.934539 containerd[1984]: time="2026-01-13T23:46:02.932889410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:46:02.935553 containerd[1984]: time="2026-01-13T23:46:02.935451506Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=0" Jan 13 23:46:02.936688 containerd[1984]: time="2026-01-13T23:46:02.936630830Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:46:02.942965 containerd[1984]: time="2026-01-13T23:46:02.942903747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:46:02.945359 containerd[1984]: time="2026-01-13T23:46:02.945277515Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.318794331s" Jan 13 23:46:02.945359 containerd[1984]: time="2026-01-13T23:46:02.945355551Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Jan 13 23:46:02.946570 containerd[1984]: time="2026-01-13T23:46:02.946483191Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 13 
23:46:04.325278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1705803623.mount: Deactivated successfully. Jan 13 23:46:04.906574 containerd[1984]: time="2026-01-13T23:46:04.905565664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:46:04.907395 containerd[1984]: time="2026-01-13T23:46:04.907315048Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=0" Jan 13 23:46:04.909985 containerd[1984]: time="2026-01-13T23:46:04.909522544Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:46:04.913494 containerd[1984]: time="2026-01-13T23:46:04.913428952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:46:04.914955 containerd[1984]: time="2026-01-13T23:46:04.914894932Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.967258697s" Jan 13 23:46:04.915093 containerd[1984]: time="2026-01-13T23:46:04.914951584Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Jan 13 23:46:04.916287 containerd[1984]: time="2026-01-13T23:46:04.915982276Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 13 23:46:05.499100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1578562619.mount: Deactivated successfully. 
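The containerd entries above report each completed pull with the image reference, its digest, the unpacked size in bytes, and the wall-clock duration (for kube-proxy, size "28257692" in 1.967258697s). A minimal sketch of how those completion lines could be summarised from a saved copy of this journal; the regex assumes only the exact "Pulled image ... size ... in ..." wording shown here, and journal.txt is a placeholder file name.

import re
import sys

# Matches the pull-completion lines above, e.g.
#   msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" ... size \"28257692\" in 1.967258697s"
PULL_RE = re.compile(
    r'Pulled image \\?"(?P<image>[^"\\]+)\\?"'   # image reference
    r'.*?size \\?"(?P<size>\d+)\\?"'             # unpacked size in bytes
    r' in (?P<value>[\d.]+)(?P<unit>ms|s)'       # pull duration
)

def summarize(journal_text: str) -> None:
    total_bytes, total_secs = 0, 0.0
    for m in PULL_RE.finditer(journal_text):
        secs = float(m["value"]) / (1000.0 if m["unit"] == "ms" else 1.0)
        total_bytes += int(m["size"])
        total_secs += secs
        print(f'{m["image"]:55s} {int(m["size"]) / 1e6:8.1f} MB {secs:7.2f} s')
    if total_secs:
        print(f"total {total_bytes / 1e6:.1f} MB in {total_secs:.1f} s of pull time")

if __name__ == "__main__":
    summarize(open(sys.argv[1] if len(sys.argv) > 1 else "journal.txt").read())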
Jan 13 23:46:06.701006 containerd[1984]: time="2026-01-13T23:46:06.700950317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:46:06.703445 containerd[1984]: time="2026-01-13T23:46:06.703384073Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=0" Jan 13 23:46:06.705258 containerd[1984]: time="2026-01-13T23:46:06.705216185Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:46:06.710483 containerd[1984]: time="2026-01-13T23:46:06.710418437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:46:06.712982 containerd[1984]: time="2026-01-13T23:46:06.712917749Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.796881257s" Jan 13 23:46:06.712982 containerd[1984]: time="2026-01-13T23:46:06.712978505Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jan 13 23:46:06.714027 containerd[1984]: time="2026-01-13T23:46:06.713644649Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 13 23:46:07.164445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount303830202.mount: Deactivated successfully. 
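The short-lived var-lib-containerd-tmpmounts-containerd\x2dmount*.mount units being deactivated above (and the earlier var-lib-docker-overlay2 one) are mount units whose names are systemd-escaped paths: '/' is mapped to '-', and a literal '-' or other special byte is encoded as \xNN. A small sketch of the reverse mapping, roughly what systemd-escape --unescape --path does, covering only the escapes that appear in this log:

import re

def systemd_unescape_path(unit: str) -> str:
    """Recover the mount point encoded in a systemd mount-unit name."""
    name = unit.removesuffix(".mount")
    # Map '-' back to '/' first; the \xNN sequences contain no '-' so they survive.
    slashed = "/" + name.replace("-", "/")
    # Then decode the \xNN escapes, e.g. \x2d -> '-'.
    return re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), slashed)

# One of the units from the entries above:
print(systemd_unescape_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount1578562619.mount"))
# -> /var/lib/containerd/tmpmounts/containerd-mount1578562619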
Jan 13 23:46:07.175493 containerd[1984]: time="2026-01-13T23:46:07.175411504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 23:46:07.178706 containerd[1984]: time="2026-01-13T23:46:07.178631776Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 13 23:46:07.180668 containerd[1984]: time="2026-01-13T23:46:07.180610660Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 23:46:07.186147 containerd[1984]: time="2026-01-13T23:46:07.186073240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 23:46:07.188748 containerd[1984]: time="2026-01-13T23:46:07.188678584Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 474.983283ms" Jan 13 23:46:07.188748 containerd[1984]: time="2026-01-13T23:46:07.188736532Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 13 23:46:07.189393 containerd[1984]: time="2026-01-13T23:46:07.189334360Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 13 23:46:07.394973 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 23:46:07.398263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:46:07.788579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:46:07.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:46:07.799559 kernel: audit: type=1130 audit(1768347967.788:305): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:46:07.802758 (kubelet)[2732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 23:46:07.895932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2950137801.mount: Deactivated successfully. Jan 13 23:46:07.920401 kubelet[2732]: E0113 23:46:07.920255 2732 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 23:46:07.926745 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 23:46:07.927065 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
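At this point kubelet.service has failed twice in this excerpt for the same reason: /var/lib/kubelet/config.yaml does not exist yet, so the process exits with status 1 and systemd schedules another restart. That file is normally written later by kubeadm init or kubeadm join, so the crash loop here is the expected pre-bootstrap state rather than a fault. For orientation, a hedged sketch of a check plus the rough shape of the file that eventually appears; every value in the hint is an illustrative assumption except the systemd cgroup driver and the static-pod path, both of which later kubelet lines in this log report.

import pathlib

CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")

# Rough shape of the KubeletConfiguration kubeadm eventually writes; values are
# assumptions apart from cgroupDriver and staticPodPath (confirmed further down
# in this log).
HINT = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
"""

if __name__ == "__main__":
    if CONFIG.exists():
        print(f"{CONFIG} present ({CONFIG.stat().st_size} bytes); kubelet can load it")
    else:
        # The same condition run.go reports above.
        print(f"{CONFIG} missing: kubelet exits 1 and systemd keeps restarting it")
        print("once written by kubeadm it will look roughly like:\n" + HINT)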
Jan 13 23:46:07.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 13 23:46:07.928052 systemd[1]: kubelet.service: Consumed 345ms CPU time, 105.7M memory peak. Jan 13 23:46:07.935551 kernel: audit: type=1131 audit(1768347967.927:306): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 13 23:46:10.169553 containerd[1984]: time="2026-01-13T23:46:10.169091970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:46:10.171851 containerd[1984]: time="2026-01-13T23:46:10.171766674Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=57926377" Jan 13 23:46:10.174549 containerd[1984]: time="2026-01-13T23:46:10.174108726Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:46:10.181564 containerd[1984]: time="2026-01-13T23:46:10.181459410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:46:10.183799 containerd[1984]: time="2026-01-13T23:46:10.183752646Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.99436017s" Jan 13 23:46:10.183970 containerd[1984]: time="2026-01-13T23:46:10.183941334Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jan 13 23:46:10.994981 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 13 23:46:10.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:46:11.007566 kernel: audit: type=1131 audit(1768347970.995:307): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:46:11.016000 audit: BPF prog-id=66 op=UNLOAD Jan 13 23:46:11.018545 kernel: audit: type=1334 audit(1768347971.016:308): prog-id=66 op=UNLOAD Jan 13 23:46:18.144556 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 23:46:18.149075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:46:18.499122 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:46:18.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:46:18.510560 kernel: audit: type=1130 audit(1768347978.499:309): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:46:18.513961 (kubelet)[2826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 23:46:18.586720 kubelet[2826]: E0113 23:46:18.586662 2826 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 23:46:18.591383 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 23:46:18.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 13 23:46:18.592650 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 23:46:18.593595 systemd[1]: kubelet.service: Consumed 293ms CPU time, 105.2M memory peak. Jan 13 23:46:18.598574 kernel: audit: type=1131 audit(1768347978.592:310): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 13 23:46:19.579051 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:46:19.579442 systemd[1]: kubelet.service: Consumed 293ms CPU time, 105.2M memory peak. Jan 13 23:46:19.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:46:19.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:46:19.587036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:46:19.591616 kernel: audit: type=1130 audit(1768347979.578:311): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:46:19.591736 kernel: audit: type=1131 audit(1768347979.578:312): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:46:19.658349 systemd[1]: Reload requested from client PID 2840 ('systemctl') (unit session-8.scope)... Jan 13 23:46:19.658382 systemd[1]: Reloading... Jan 13 23:46:19.926560 zram_generator::config[2893]: No configuration found. Jan 13 23:46:20.413258 systemd[1]: Reloading finished in 754 ms. 
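The third failure is identical, after which the unit is stopped and systemd is reloaded (Reload requested from PID 2840, i.e. a systemctl invocation in session-8; the excerpt does not show which tool drove it). The "Scheduled restart job" lines above land at 23:45:56.64, 23:46:07.39 and 23:46:18.14, about 10.75 s apart, consistent with a RestartSec on the order of ten seconds in the kubelet unit; the setting itself is not visible here, so that figure is an inference. A sketch that extracts the counters and spacing from a saved copy of this excerpt (the syslog prefix carries no year, so one is supplied):

import re
from datetime import datetime

RESTART_RE = re.compile(
    r"(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) systemd\[1\]: kubelet\.service: "
    r"Scheduled restart job, restart counter is at (?P<n>\d+)\."
)

def restart_spacing(journal_text: str, year: int = 2026):
    """Yield (counter, timestamp, seconds since the previous scheduled restart)."""
    prev = None
    for m in RESTART_RE.finditer(journal_text):
        ts = datetime.strptime(f"{year} {m['ts']}", "%Y %b %d %H:%M:%S.%f")
        yield int(m["n"]), ts, (ts - prev).total_seconds() if prev else None
        prev = ts

if __name__ == "__main__":
    for n, ts, gap in restart_spacing(open("journal.txt").read()):  # placeholder path
        print(f"restart #{n} at {ts.time()}" + (f"  (+{gap:.2f}s)" if gap else ""))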
Jan 13 23:46:20.458000 audit: BPF prog-id=70 op=LOAD Jan 13 23:46:20.464540 kernel: audit: type=1334 audit(1768347980.458:313): prog-id=70 op=LOAD Jan 13 23:46:20.464669 kernel: audit: type=1334 audit(1768347980.458:314): prog-id=71 op=LOAD Jan 13 23:46:20.458000 audit: BPF prog-id=71 op=LOAD Jan 13 23:46:20.458000 audit: BPF prog-id=47 op=UNLOAD Jan 13 23:46:20.468210 kernel: audit: type=1334 audit(1768347980.458:315): prog-id=47 op=UNLOAD Jan 13 23:46:20.458000 audit: BPF prog-id=48 op=UNLOAD Jan 13 23:46:20.473114 kernel: audit: type=1334 audit(1768347980.458:316): prog-id=48 op=UNLOAD Jan 13 23:46:20.461000 audit: BPF prog-id=72 op=LOAD Jan 13 23:46:20.476272 kernel: audit: type=1334 audit(1768347980.461:317): prog-id=72 op=LOAD Jan 13 23:46:20.461000 audit: BPF prog-id=58 op=UNLOAD Jan 13 23:46:20.481021 kernel: audit: type=1334 audit(1768347980.461:318): prog-id=58 op=UNLOAD Jan 13 23:46:20.473000 audit: BPF prog-id=73 op=LOAD Jan 13 23:46:20.473000 audit: BPF prog-id=55 op=UNLOAD Jan 13 23:46:20.473000 audit: BPF prog-id=74 op=LOAD Jan 13 23:46:20.473000 audit: BPF prog-id=75 op=LOAD Jan 13 23:46:20.473000 audit: BPF prog-id=56 op=UNLOAD Jan 13 23:46:20.473000 audit: BPF prog-id=57 op=UNLOAD Jan 13 23:46:20.477000 audit: BPF prog-id=76 op=LOAD Jan 13 23:46:20.477000 audit: BPF prog-id=69 op=UNLOAD Jan 13 23:46:20.478000 audit: BPF prog-id=77 op=LOAD Jan 13 23:46:20.478000 audit: BPF prog-id=62 op=UNLOAD Jan 13 23:46:20.485000 audit: BPF prog-id=78 op=LOAD Jan 13 23:46:20.485000 audit: BPF prog-id=63 op=UNLOAD Jan 13 23:46:20.486000 audit: BPF prog-id=79 op=LOAD Jan 13 23:46:20.486000 audit: BPF prog-id=80 op=LOAD Jan 13 23:46:20.486000 audit: BPF prog-id=64 op=UNLOAD Jan 13 23:46:20.486000 audit: BPF prog-id=65 op=UNLOAD Jan 13 23:46:20.488000 audit: BPF prog-id=81 op=LOAD Jan 13 23:46:20.488000 audit: BPF prog-id=49 op=UNLOAD Jan 13 23:46:20.488000 audit: BPF prog-id=82 op=LOAD Jan 13 23:46:20.495000 audit: BPF prog-id=83 op=LOAD Jan 13 23:46:20.495000 audit: BPF prog-id=50 op=UNLOAD Jan 13 23:46:20.495000 audit: BPF prog-id=51 op=UNLOAD Jan 13 23:46:20.496000 audit: BPF prog-id=84 op=LOAD Jan 13 23:46:20.496000 audit: BPF prog-id=52 op=UNLOAD Jan 13 23:46:20.497000 audit: BPF prog-id=85 op=LOAD Jan 13 23:46:20.497000 audit: BPF prog-id=86 op=LOAD Jan 13 23:46:20.497000 audit: BPF prog-id=53 op=UNLOAD Jan 13 23:46:20.497000 audit: BPF prog-id=54 op=UNLOAD Jan 13 23:46:20.499000 audit: BPF prog-id=87 op=LOAD Jan 13 23:46:20.499000 audit: BPF prog-id=59 op=UNLOAD Jan 13 23:46:20.499000 audit: BPF prog-id=88 op=LOAD Jan 13 23:46:20.500000 audit: BPF prog-id=89 op=LOAD Jan 13 23:46:20.500000 audit: BPF prog-id=60 op=UNLOAD Jan 13 23:46:20.500000 audit: BPF prog-id=61 op=UNLOAD Jan 13 23:46:20.529889 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 23:46:20.530073 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 23:46:20.531073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:46:20.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 13 23:46:20.531256 systemd[1]: kubelet.service: Consumed 238ms CPU time, 95.1M memory peak. Jan 13 23:46:20.538104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:46:20.885185 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
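The burst of audit: BPF prog-id=NN op=LOAD/UNLOAD records above is systemd re-creating the per-unit BPF programs it attaches (device and socket filters and the like) as it reapplies unit properties during the daemon-reload, after which the kubelet unit is stopped and started once more. A trivial tally over those records, assuming only the "prog-id=... op=..." wording seen here:

import re
from collections import Counter

BPF_RE = re.compile(r"audit: BPF prog-id=(?P<id>\d+) op=(?P<op>LOAD|UNLOAD)")

def bpf_churn(journal_text: str) -> None:
    ops = Counter(m["op"] for m in BPF_RE.finditer(journal_text))
    print(f"{ops['LOAD']} loads, {ops['UNLOAD']} unloads "
          f"(net {ops['LOAD'] - ops['UNLOAD']:+d} programs across the reload)")

if __name__ == "__main__":
    bpf_churn(open("journal.txt").read())  # placeholder path for this excerpt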
Jan 13 23:46:20.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:46:20.903044 (kubelet)[2951]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 23:46:20.976460 kubelet[2951]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 23:46:20.976460 kubelet[2951]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 13 23:46:20.976460 kubelet[2951]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 23:46:20.977040 kubelet[2951]: I0113 23:46:20.976562 2951 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 23:46:23.952103 kubelet[2951]: I0113 23:46:23.952051 2951 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 13 23:46:23.954550 kubelet[2951]: I0113 23:46:23.952771 2951 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 23:46:23.954550 kubelet[2951]: I0113 23:46:23.953197 2951 server.go:956] "Client rotation is on, will bootstrap in background" Jan 13 23:46:24.021325 kubelet[2951]: E0113 23:46:24.021264 2951 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.28.147:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.147:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 13 23:46:24.023217 kubelet[2951]: I0113 23:46:24.023153 2951 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 23:46:24.039853 kubelet[2951]: I0113 23:46:24.039811 2951 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 13 23:46:24.048403 kubelet[2951]: I0113 23:46:24.048362 2951 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 23:46:24.049245 kubelet[2951]: I0113 23:46:24.049168 2951 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 23:46:24.049711 kubelet[2951]: I0113 23:46:24.049228 2951 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-147","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 23:46:24.049892 kubelet[2951]: I0113 23:46:24.049850 2951 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 23:46:24.049892 kubelet[2951]: I0113 23:46:24.049879 2951 container_manager_linux.go:303] "Creating device plugin manager" Jan 13 23:46:24.050244 kubelet[2951]: I0113 23:46:24.050200 2951 state_mem.go:36] "Initialized new in-memory state store" Jan 13 23:46:24.057944 kubelet[2951]: I0113 23:46:24.057737 2951 kubelet.go:480] "Attempting to sync node with API server" Jan 13 23:46:24.057944 kubelet[2951]: I0113 23:46:24.057788 2951 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 23:46:24.057944 kubelet[2951]: I0113 23:46:24.057837 2951 kubelet.go:386] "Adding apiserver pod source" Jan 13 23:46:24.061012 kubelet[2951]: I0113 23:46:24.060865 2951 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 23:46:24.067228 kubelet[2951]: E0113 23:46:24.067160 2951 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-147&limit=500&resourceVersion=0\": dial tcp 172.31.28.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 13 23:46:24.067971 kubelet[2951]: I0113 23:46:24.067556 2951 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 13 23:46:24.068926 kubelet[2951]: I0113 23:46:24.068893 2951 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is 
disabled" Jan 13 23:46:24.069283 kubelet[2951]: W0113 23:46:24.069263 2951 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 23:46:24.077420 kubelet[2951]: I0113 23:46:24.077388 2951 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 13 23:46:24.077702 kubelet[2951]: I0113 23:46:24.077682 2951 server.go:1289] "Started kubelet" Jan 13 23:46:24.079342 kubelet[2951]: E0113 23:46:24.079284 2951 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.147:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 13 23:46:24.079481 kubelet[2951]: I0113 23:46:24.079353 2951 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 23:46:24.084695 kubelet[2951]: I0113 23:46:24.084616 2951 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 23:46:24.085244 kubelet[2951]: I0113 23:46:24.085215 2951 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 23:46:24.085405 kubelet[2951]: I0113 23:46:24.084815 2951 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 23:46:24.085631 kubelet[2951]: I0113 23:46:24.085591 2951 server.go:317] "Adding debug handlers to kubelet server" Jan 13 23:46:24.091101 kubelet[2951]: I0113 23:46:24.091039 2951 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 23:46:24.100288 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 13 23:46:24.100432 kernel: audit: type=1325 audit(1768347984.095:355): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2966 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:24.095000 audit[2966]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2966 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:24.100993 kubelet[2951]: E0113 23:46:24.097549 2951 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.147:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.147:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-147.188a6f1f5639bfc0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-147,UID:ip-172-31-28-147,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-147,},FirstTimestamp:2026-01-13 23:46:24.07763552 +0000 UTC m=+3.162719501,LastTimestamp:2026-01-13 23:46:24.07763552 +0000 UTC m=+3.162719501,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-147,}" Jan 13 23:46:24.095000 audit[2966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc27048a0 a2=0 a3=0 items=0 ppid=2951 pid=2966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.101541 kernel: audit: type=1300 audit(1768347984.095:355): arch=c00000b7 
syscall=211 success=yes exit=136 a0=3 a1=ffffc27048a0 a2=0 a3=0 items=0 ppid=2951 pid=2966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.101868 kubelet[2951]: I0113 23:46:24.101843 2951 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 13 23:46:24.104543 kubelet[2951]: E0113 23:46:24.102263 2951 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-147\" not found" Jan 13 23:46:24.104543 kubelet[2951]: I0113 23:46:24.102743 2951 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 13 23:46:24.104543 kubelet[2951]: I0113 23:46:24.102826 2951 reconciler.go:26] "Reconciler: start to sync state" Jan 13 23:46:24.104543 kubelet[2951]: E0113 23:46:24.103423 2951 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 13 23:46:24.110233 kubelet[2951]: E0113 23:46:24.110153 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-147?timeout=10s\": dial tcp 172.31.28.147:6443: connect: connection refused" interval="200ms" Jan 13 23:46:24.095000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 13 23:46:24.115730 kubelet[2951]: I0113 23:46:24.115654 2951 factory.go:223] Registration of the systemd container factory successfully Jan 13 23:46:24.115969 kubelet[2951]: I0113 23:46:24.115904 2951 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 23:46:24.103000 audit[2967]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2967 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:24.117964 kubelet[2951]: E0113 23:46:24.117929 2951 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 23:46:24.120372 kernel: audit: type=1327 audit(1768347984.095:355): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 13 23:46:24.120494 kernel: audit: type=1325 audit(1768347984.103:356): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2967 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:24.103000 audit[2967]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc2341fc0 a2=0 a3=0 items=0 ppid=2951 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.128085 kernel: audit: type=1300 audit(1768347984.103:356): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc2341fc0 a2=0 a3=0 items=0 ppid=2951 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.128216 kernel: audit: type=1327 audit(1768347984.103:356): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 13 23:46:24.103000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 13 23:46:24.131723 kubelet[2951]: I0113 23:46:24.131588 2951 factory.go:223] Registration of the containerd container factory successfully Jan 13 23:46:24.116000 audit[2969]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2969 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:24.136802 kernel: audit: type=1325 audit(1768347984.116:357): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2969 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:24.116000 audit[2969]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc7554ac0 a2=0 a3=0 items=0 ppid=2951 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.145063 kernel: audit: type=1300 audit(1768347984.116:357): arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc7554ac0 a2=0 a3=0 items=0 ppid=2951 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.116000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 13 23:46:24.154001 kernel: audit: type=1327 audit(1768347984.116:357): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 13 23:46:24.154108 kernel: audit: type=1325 audit(1768347984.132:358): table=filter:45 family=2 entries=2 op=nft_register_chain pid=2972 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:24.132000 audit[2972]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2972 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:24.132000 audit[2972]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=340 a0=3 a1=fffff8a7c5b0 a2=0 a3=0 items=0 ppid=2951 pid=2972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.132000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 13 23:46:24.167187 kubelet[2951]: I0113 23:46:24.167140 2951 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 13 23:46:24.167187 kubelet[2951]: I0113 23:46:24.167173 2951 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 13 23:46:24.167371 kubelet[2951]: I0113 23:46:24.167203 2951 state_mem.go:36] "Initialized new in-memory state store" Jan 13 23:46:24.167000 audit[2979]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:24.167000 audit[2979]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffff1cd7030 a2=0 a3=0 items=0 ppid=2951 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.167000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 13 23:46:24.169216 kubelet[2951]: I0113 23:46:24.169096 2951 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 13 23:46:24.171000 audit[2980]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:24.171000 audit[2980]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffec7b1af0 a2=0 a3=0 items=0 ppid=2951 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.171000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 13 23:46:24.173030 kubelet[2951]: I0113 23:46:24.172162 2951 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 13 23:46:24.173030 kubelet[2951]: I0113 23:46:24.172888 2951 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 13 23:46:24.173030 kubelet[2951]: I0113 23:46:24.172927 2951 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
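The container-manager NodeConfig dump further up lists the hard eviction thresholds the kubelet will enforce on this node: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A bare-bones sketch of how such signals compare against those thresholds; the sample stats are invented for illustration, and the real eviction manager adds grace periods, reclaim and pod ranking on top of this check.

# Hard eviction thresholds quoted from the NodeConfig dump above.
THRESHOLDS = {
    "memory.available":   ("quantity", 100 * 1024 * 1024),  # 100Mi in bytes
    "nodefs.available":   ("percentage", 0.10),
    "nodefs.inodesFree":  ("percentage", 0.05),
    "imagefs.available":  ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

def breached(signal: str, available: float, capacity: float) -> bool:
    kind, limit = THRESHOLDS[signal]
    return available < (limit if kind == "quantity" else limit * capacity)

# Invented sample numbers, purely to exercise the comparison.
sample = {
    "memory.available":   (6.0e8, 8.0e9),   # 600 MB free of 8 GB  -> fine
    "nodefs.available":   (4.0e9, 80.0e9),  # 5% free, under 10%   -> breached
    "nodefs.inodesFree":  (3.0e6, 5.0e6),
    "imagefs.available":  (20.0e9, 80.0e9),
    "imagefs.inodesFree": (3.0e6, 5.0e6),
}

for sig, (avail, cap) in sample.items():
    print(f"{sig:20s} breached={breached(sig, avail, cap)}")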
Jan 13 23:46:24.173030 kubelet[2951]: I0113 23:46:24.172943 2951 kubelet.go:2436] "Starting kubelet main sync loop" Jan 13 23:46:24.173000 audit[2981]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2981 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:24.173000 audit[2981]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc2677a40 a2=0 a3=0 items=0 ppid=2951 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.173000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 13 23:46:24.174789 kubelet[2951]: E0113 23:46:24.173415 2951 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 23:46:24.174000 audit[2982]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:24.175737 kubelet[2951]: I0113 23:46:24.175533 2951 policy_none.go:49] "None policy: Start" Jan 13 23:46:24.175737 kubelet[2951]: I0113 23:46:24.175587 2951 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 13 23:46:24.175737 kubelet[2951]: I0113 23:46:24.175622 2951 state_mem.go:35] "Initializing new in-memory state store" Jan 13 23:46:24.174000 audit[2982]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd043c410 a2=0 a3=0 items=0 ppid=2951 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.174000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 13 23:46:24.178000 audit[2983]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=2983 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:24.178000 audit[2983]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc5debdd0 a2=0 a3=0 items=0 ppid=2951 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.178000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 13 23:46:24.182000 audit[2984]: NETFILTER_CFG table=nat:51 family=2 entries=1 op=nft_register_chain pid=2984 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:24.182000 audit[2984]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc8d4fa90 a2=0 a3=0 items=0 ppid=2951 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.182000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 13 23:46:24.183239 kubelet[2951]: E0113 23:46:24.182927 2951 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://172.31.28.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 13 23:46:24.184000 audit[2985]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_chain pid=2985 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:24.184000 audit[2985]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe83d3d10 a2=0 a3=0 items=0 ppid=2951 pid=2985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.184000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 13 23:46:24.188000 audit[2986]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2986 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:24.188000 audit[2986]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff11e8020 a2=0 a3=0 items=0 ppid=2951 pid=2986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.188000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 13 23:46:24.200265 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 23:46:24.204269 kubelet[2951]: E0113 23:46:24.202858 2951 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-147\" not found" Jan 13 23:46:24.218256 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 23:46:24.226133 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 23:46:24.238456 kubelet[2951]: E0113 23:46:24.238122 2951 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 13 23:46:24.239802 kubelet[2951]: I0113 23:46:24.239760 2951 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 23:46:24.239929 kubelet[2951]: I0113 23:46:24.239794 2951 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 23:46:24.241536 kubelet[2951]: I0113 23:46:24.240263 2951 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 23:46:24.243869 kubelet[2951]: E0113 23:46:24.243771 2951 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 13 23:46:24.243869 kubelet[2951]: E0113 23:46:24.243839 2951 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-147\" not found" Jan 13 23:46:24.298845 systemd[1]: Created slice kubepods-burstable-pod62f2ff7a3c3030a8337188ca0d6c04c2.slice - libcontainer container kubepods-burstable-pod62f2ff7a3c3030a8337188ca0d6c04c2.slice. 
Jan 13 23:46:24.311740 kubelet[2951]: E0113 23:46:24.311648 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-147?timeout=10s\": dial tcp 172.31.28.147:6443: connect: connection refused" interval="400ms" Jan 13 23:46:24.315634 kubelet[2951]: E0113 23:46:24.315536 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-147\" not found" node="ip-172-31-28-147" Jan 13 23:46:24.321547 systemd[1]: Created slice kubepods-burstable-pode786111f2b8949a692806c538ccb2c48.slice - libcontainer container kubepods-burstable-pode786111f2b8949a692806c538ccb2c48.slice. Jan 13 23:46:24.327374 kubelet[2951]: E0113 23:46:24.327297 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-147\" not found" node="ip-172-31-28-147" Jan 13 23:46:24.333679 systemd[1]: Created slice kubepods-burstable-pod0f5de2aa30ea7498304d204c99202f43.slice - libcontainer container kubepods-burstable-pod0f5de2aa30ea7498304d204c99202f43.slice. Jan 13 23:46:24.337344 kubelet[2951]: E0113 23:46:24.337299 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-147\" not found" node="ip-172-31-28-147" Jan 13 23:46:24.342284 kubelet[2951]: I0113 23:46:24.342240 2951 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-147" Jan 13 23:46:24.343212 kubelet[2951]: E0113 23:46:24.343165 2951 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.147:6443/api/v1/nodes\": dial tcp 172.31.28.147:6443: connect: connection refused" node="ip-172-31-28-147" Jan 13 23:46:24.404723 kubelet[2951]: I0113 23:46:24.404671 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e786111f2b8949a692806c538ccb2c48-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-147\" (UID: \"e786111f2b8949a692806c538ccb2c48\") " pod="kube-system/kube-controller-manager-ip-172-31-28-147" Jan 13 23:46:24.404831 kubelet[2951]: I0113 23:46:24.404733 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e786111f2b8949a692806c538ccb2c48-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-147\" (UID: \"e786111f2b8949a692806c538ccb2c48\") " pod="kube-system/kube-controller-manager-ip-172-31-28-147" Jan 13 23:46:24.404831 kubelet[2951]: I0113 23:46:24.404777 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f5de2aa30ea7498304d204c99202f43-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-147\" (UID: \"0f5de2aa30ea7498304d204c99202f43\") " pod="kube-system/kube-scheduler-ip-172-31-28-147" Jan 13 23:46:24.404831 kubelet[2951]: I0113 23:46:24.404815 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/62f2ff7a3c3030a8337188ca0d6c04c2-ca-certs\") pod \"kube-apiserver-ip-172-31-28-147\" (UID: \"62f2ff7a3c3030a8337188ca0d6c04c2\") " pod="kube-system/kube-apiserver-ip-172-31-28-147" Jan 13 23:46:24.404995 kubelet[2951]: I0113 23:46:24.404852 2951 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e786111f2b8949a692806c538ccb2c48-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-147\" (UID: \"e786111f2b8949a692806c538ccb2c48\") " pod="kube-system/kube-controller-manager-ip-172-31-28-147" Jan 13 23:46:24.404995 kubelet[2951]: I0113 23:46:24.404889 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/62f2ff7a3c3030a8337188ca0d6c04c2-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-147\" (UID: \"62f2ff7a3c3030a8337188ca0d6c04c2\") " pod="kube-system/kube-apiserver-ip-172-31-28-147" Jan 13 23:46:24.404995 kubelet[2951]: I0113 23:46:24.404923 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/62f2ff7a3c3030a8337188ca0d6c04c2-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-147\" (UID: \"62f2ff7a3c3030a8337188ca0d6c04c2\") " pod="kube-system/kube-apiserver-ip-172-31-28-147" Jan 13 23:46:24.404995 kubelet[2951]: I0113 23:46:24.404959 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e786111f2b8949a692806c538ccb2c48-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-147\" (UID: \"e786111f2b8949a692806c538ccb2c48\") " pod="kube-system/kube-controller-manager-ip-172-31-28-147" Jan 13 23:46:24.405167 kubelet[2951]: I0113 23:46:24.404994 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e786111f2b8949a692806c538ccb2c48-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-147\" (UID: \"e786111f2b8949a692806c538ccb2c48\") " pod="kube-system/kube-controller-manager-ip-172-31-28-147" Jan 13 23:46:24.546343 kubelet[2951]: I0113 23:46:24.546220 2951 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-147" Jan 13 23:46:24.547667 kubelet[2951]: E0113 23:46:24.547609 2951 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.147:6443/api/v1/nodes\": dial tcp 172.31.28.147:6443: connect: connection refused" node="ip-172-31-28-147" Jan 13 23:46:24.617689 containerd[1984]: time="2026-01-13T23:46:24.617588506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-147,Uid:62f2ff7a3c3030a8337188ca0d6c04c2,Namespace:kube-system,Attempt:0,}" Jan 13 23:46:24.629760 containerd[1984]: time="2026-01-13T23:46:24.629706274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-147,Uid:e786111f2b8949a692806c538ccb2c48,Namespace:kube-system,Attempt:0,}" Jan 13 23:46:24.641158 containerd[1984]: time="2026-01-13T23:46:24.640892434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-147,Uid:0f5de2aa30ea7498304d204c99202f43,Namespace:kube-system,Attempt:0,}" Jan 13 23:46:24.673558 containerd[1984]: time="2026-01-13T23:46:24.673109470Z" level=info msg="connecting to shim 488b931fe6dc5923234c8b6bafe546abd6ead34abd56f39a4dcc1140a0dff2a7" address="unix:///run/containerd/s/bcd7bc798216c0be79fe96ff98695fc9f961d0d234c5f875edebbc431848b1ce" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:46:24.706081 containerd[1984]: 
time="2026-01-13T23:46:24.706030547Z" level=info msg="connecting to shim 078426d531698a8cf4fe86458d3609a5ec721286e8ab99c1befa622d98701a17" address="unix:///run/containerd/s/bc59c77e77f25ce8923894cce3d08b3fae624eb4cdbff68af2eabbd16ca4b3be" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:46:24.715169 kubelet[2951]: E0113 23:46:24.715087 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-147?timeout=10s\": dial tcp 172.31.28.147:6443: connect: connection refused" interval="800ms" Jan 13 23:46:24.745968 containerd[1984]: time="2026-01-13T23:46:24.745888895Z" level=info msg="connecting to shim 004b00c0f3db68828e0356e2b05fc24082e8370ab8f4389eaeaf3b88deeadd1b" address="unix:///run/containerd/s/0150ef3e24f409cb652c5b7650d7f1fd90837fd43089125ce1131c8dddf8fc37" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:46:24.771058 systemd[1]: Started cri-containerd-488b931fe6dc5923234c8b6bafe546abd6ead34abd56f39a4dcc1140a0dff2a7.scope - libcontainer container 488b931fe6dc5923234c8b6bafe546abd6ead34abd56f39a4dcc1140a0dff2a7. Jan 13 23:46:24.801232 systemd[1]: Started cri-containerd-078426d531698a8cf4fe86458d3609a5ec721286e8ab99c1befa622d98701a17.scope - libcontainer container 078426d531698a8cf4fe86458d3609a5ec721286e8ab99c1befa622d98701a17. Jan 13 23:46:24.823000 audit: BPF prog-id=90 op=LOAD Jan 13 23:46:24.826000 audit: BPF prog-id=91 op=LOAD Jan 13 23:46:24.826000 audit[3016]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2996 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438386239333166653664633539323332333463386236626166653534 Jan 13 23:46:24.827000 audit: BPF prog-id=91 op=UNLOAD Jan 13 23:46:24.827000 audit[3016]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2996 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.827000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438386239333166653664633539323332333463386236626166653534 Jan 13 23:46:24.827000 audit: BPF prog-id=92 op=LOAD Jan 13 23:46:24.827000 audit[3016]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2996 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.827000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438386239333166653664633539323332333463386236626166653534 Jan 13 23:46:24.827000 audit: BPF prog-id=93 op=LOAD Jan 13 23:46:24.827000 
audit[3016]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2996 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.827000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438386239333166653664633539323332333463386236626166653534 Jan 13 23:46:24.829000 audit: BPF prog-id=93 op=UNLOAD Jan 13 23:46:24.829000 audit[3016]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2996 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438386239333166653664633539323332333463386236626166653534 Jan 13 23:46:24.829000 audit: BPF prog-id=92 op=UNLOAD Jan 13 23:46:24.829000 audit[3016]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2996 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438386239333166653664633539323332333463386236626166653534 Jan 13 23:46:24.829000 audit: BPF prog-id=94 op=LOAD Jan 13 23:46:24.829000 audit[3016]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2996 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438386239333166653664633539323332333463386236626166653534 Jan 13 23:46:24.835898 systemd[1]: Started cri-containerd-004b00c0f3db68828e0356e2b05fc24082e8370ab8f4389eaeaf3b88deeadd1b.scope - libcontainer container 004b00c0f3db68828e0356e2b05fc24082e8370ab8f4389eaeaf3b88deeadd1b. 
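The audit records above come from runc setting up the sandbox containers: arch=c00000b7 is AArch64, syscall 280 is bpf(2) (the prog-id LOAD events), and syscall 57 is close(2) on the program fd, which is what triggers the paired UNLOAD events. The PROCTITLE field is the process's argv, hex-encoded with NUL separators; a small sketch for decoding it (the sample value is a shortened prefix of one PROCTITLE from this log):

// Decode an audit PROCTITLE value into a readable command line.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Shortened prefix of a PROCTITLE value from the records above.
	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// The kernel stores argv as NUL-separated strings; rejoin them with spaces.
	args := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
	fmt.Println(strings.Join(args, " ")) // runc --root /run/containerd/runc/k8s.io
}

Decoded in full, each record in this burst is a runc invocation rooted at /run/containerd/runc/k8s.io with a per-container log path under /run/containerd/io.containerd.runtime.v2.task/k8s.io/.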
Jan 13 23:46:24.842000 audit: BPF prog-id=95 op=LOAD Jan 13 23:46:24.844000 audit: BPF prog-id=96 op=LOAD Jan 13 23:46:24.844000 audit[3044]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=3015 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037383432366435333136393861386366346665383634353864333630 Jan 13 23:46:24.845000 audit: BPF prog-id=96 op=UNLOAD Jan 13 23:46:24.845000 audit[3044]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3015 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037383432366435333136393861386366346665383634353864333630 Jan 13 23:46:24.845000 audit: BPF prog-id=97 op=LOAD Jan 13 23:46:24.845000 audit[3044]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=3015 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037383432366435333136393861386366346665383634353864333630 Jan 13 23:46:24.846000 audit: BPF prog-id=98 op=LOAD Jan 13 23:46:24.846000 audit[3044]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=3015 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037383432366435333136393861386366346665383634353864333630 Jan 13 23:46:24.846000 audit: BPF prog-id=98 op=UNLOAD Jan 13 23:46:24.846000 audit[3044]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3015 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037383432366435333136393861386366346665383634353864333630 Jan 13 23:46:24.846000 audit: BPF prog-id=97 op=UNLOAD Jan 13 23:46:24.846000 audit[3044]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3015 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037383432366435333136393861386366346665383634353864333630 Jan 13 23:46:24.846000 audit: BPF prog-id=99 op=LOAD Jan 13 23:46:24.846000 audit[3044]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=3015 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037383432366435333136393861386366346665383634353864333630 Jan 13 23:46:24.877000 audit: BPF prog-id=100 op=LOAD Jan 13 23:46:24.878000 audit: BPF prog-id=101 op=LOAD Jan 13 23:46:24.878000 audit[3065]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3043 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030346230306330663364623638383238653033353665326230356663 Jan 13 23:46:24.879000 audit: BPF prog-id=101 op=UNLOAD Jan 13 23:46:24.879000 audit[3065]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3043 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030346230306330663364623638383238653033353665326230356663 Jan 13 23:46:24.879000 audit: BPF prog-id=102 op=LOAD Jan 13 23:46:24.879000 audit[3065]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3043 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030346230306330663364623638383238653033353665326230356663 Jan 13 23:46:24.879000 audit: BPF prog-id=103 op=LOAD Jan 13 23:46:24.879000 audit[3065]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3043 
pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030346230306330663364623638383238653033353665326230356663 Jan 13 23:46:24.879000 audit: BPF prog-id=103 op=UNLOAD Jan 13 23:46:24.879000 audit[3065]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3043 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030346230306330663364623638383238653033353665326230356663 Jan 13 23:46:24.879000 audit: BPF prog-id=102 op=UNLOAD Jan 13 23:46:24.879000 audit[3065]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3043 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030346230306330663364623638383238653033353665326230356663 Jan 13 23:46:24.880000 audit: BPF prog-id=104 op=LOAD Jan 13 23:46:24.880000 audit[3065]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3043 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:24.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030346230306330663364623638383238653033353665326230356663 Jan 13 23:46:24.929159 containerd[1984]: time="2026-01-13T23:46:24.929091852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-147,Uid:62f2ff7a3c3030a8337188ca0d6c04c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"488b931fe6dc5923234c8b6bafe546abd6ead34abd56f39a4dcc1140a0dff2a7\"" Jan 13 23:46:24.943434 containerd[1984]: time="2026-01-13T23:46:24.942212016Z" level=info msg="CreateContainer within sandbox \"488b931fe6dc5923234c8b6bafe546abd6ead34abd56f39a4dcc1140a0dff2a7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 23:46:24.953278 kubelet[2951]: I0113 23:46:24.953214 2951 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-147" Jan 13 23:46:24.954488 kubelet[2951]: E0113 23:46:24.954441 2951 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.147:6443/api/v1/nodes\": dial tcp 172.31.28.147:6443: connect: connection refused" node="ip-172-31-28-147" Jan 13 
23:46:24.964722 containerd[1984]: time="2026-01-13T23:46:24.964495332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-147,Uid:e786111f2b8949a692806c538ccb2c48,Namespace:kube-system,Attempt:0,} returns sandbox id \"078426d531698a8cf4fe86458d3609a5ec721286e8ab99c1befa622d98701a17\"" Jan 13 23:46:24.974559 containerd[1984]: time="2026-01-13T23:46:24.973988964Z" level=info msg="CreateContainer within sandbox \"078426d531698a8cf4fe86458d3609a5ec721286e8ab99c1befa622d98701a17\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 23:46:24.975471 containerd[1984]: time="2026-01-13T23:46:24.975429156Z" level=info msg="Container 3b9a614512c8c4ee6b0647dc00ec2464d015c834e1d19773a147a37fe64aa433: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:46:24.997180 containerd[1984]: time="2026-01-13T23:46:24.997112760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-147,Uid:0f5de2aa30ea7498304d204c99202f43,Namespace:kube-system,Attempt:0,} returns sandbox id \"004b00c0f3db68828e0356e2b05fc24082e8370ab8f4389eaeaf3b88deeadd1b\"" Jan 13 23:46:25.001730 update_engine[1946]: I20260113 23:46:25.001580 1946 update_attempter.cc:509] Updating boot flags... Jan 13 23:46:25.006497 containerd[1984]: time="2026-01-13T23:46:25.006388544Z" level=info msg="CreateContainer within sandbox \"004b00c0f3db68828e0356e2b05fc24082e8370ab8f4389eaeaf3b88deeadd1b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 23:46:25.010826 containerd[1984]: time="2026-01-13T23:46:25.010058420Z" level=info msg="Container dad01a1114c6d840a7b4b60001f9c8597697badb3ce9eb73cd3944ea3ea1f29e: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:46:25.013595 containerd[1984]: time="2026-01-13T23:46:25.013481768Z" level=info msg="CreateContainer within sandbox \"488b931fe6dc5923234c8b6bafe546abd6ead34abd56f39a4dcc1140a0dff2a7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3b9a614512c8c4ee6b0647dc00ec2464d015c834e1d19773a147a37fe64aa433\"" Jan 13 23:46:25.015749 containerd[1984]: time="2026-01-13T23:46:25.015700616Z" level=info msg="StartContainer for \"3b9a614512c8c4ee6b0647dc00ec2464d015c834e1d19773a147a37fe64aa433\"" Jan 13 23:46:25.025180 containerd[1984]: time="2026-01-13T23:46:25.024964064Z" level=info msg="connecting to shim 3b9a614512c8c4ee6b0647dc00ec2464d015c834e1d19773a147a37fe64aa433" address="unix:///run/containerd/s/bcd7bc798216c0be79fe96ff98695fc9f961d0d234c5f875edebbc431848b1ce" protocol=ttrpc version=3 Jan 13 23:46:25.041387 containerd[1984]: time="2026-01-13T23:46:25.040903088Z" level=info msg="CreateContainer within sandbox \"078426d531698a8cf4fe86458d3609a5ec721286e8ab99c1befa622d98701a17\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dad01a1114c6d840a7b4b60001f9c8597697badb3ce9eb73cd3944ea3ea1f29e\"" Jan 13 23:46:25.046544 containerd[1984]: time="2026-01-13T23:46:25.046477496Z" level=info msg="StartContainer for \"dad01a1114c6d840a7b4b60001f9c8597697badb3ce9eb73cd3944ea3ea1f29e\"" Jan 13 23:46:25.050576 containerd[1984]: time="2026-01-13T23:46:25.050490068Z" level=info msg="Container c83bfc99f552d075b03e32f27925d564adb343aca58adb0ab88b2819d3c75122: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:46:25.053793 containerd[1984]: time="2026-01-13T23:46:25.053622380Z" level=info msg="connecting to shim dad01a1114c6d840a7b4b60001f9c8597697badb3ce9eb73cd3944ea3ea1f29e" 
address="unix:///run/containerd/s/bc59c77e77f25ce8923894cce3d08b3fae624eb4cdbff68af2eabbd16ca4b3be" protocol=ttrpc version=3 Jan 13 23:46:25.069405 kubelet[2951]: E0113 23:46:25.069333 2951 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-147&limit=500&resourceVersion=0\": dial tcp 172.31.28.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 13 23:46:25.083314 containerd[1984]: time="2026-01-13T23:46:25.082001480Z" level=info msg="CreateContainer within sandbox \"004b00c0f3db68828e0356e2b05fc24082e8370ab8f4389eaeaf3b88deeadd1b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c83bfc99f552d075b03e32f27925d564adb343aca58adb0ab88b2819d3c75122\"" Jan 13 23:46:25.085993 containerd[1984]: time="2026-01-13T23:46:25.085943721Z" level=info msg="StartContainer for \"c83bfc99f552d075b03e32f27925d564adb343aca58adb0ab88b2819d3c75122\"" Jan 13 23:46:25.099042 systemd[1]: Started cri-containerd-3b9a614512c8c4ee6b0647dc00ec2464d015c834e1d19773a147a37fe64aa433.scope - libcontainer container 3b9a614512c8c4ee6b0647dc00ec2464d015c834e1d19773a147a37fe64aa433. Jan 13 23:46:25.112554 containerd[1984]: time="2026-01-13T23:46:25.112436265Z" level=info msg="connecting to shim c83bfc99f552d075b03e32f27925d564adb343aca58adb0ab88b2819d3c75122" address="unix:///run/containerd/s/0150ef3e24f409cb652c5b7650d7f1fd90837fd43089125ce1131c8dddf8fc37" protocol=ttrpc version=3 Jan 13 23:46:25.152000 audit: BPF prog-id=105 op=LOAD Jan 13 23:46:25.156000 audit: BPF prog-id=106 op=LOAD Jan 13 23:46:25.156000 audit[3131]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2996 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:25.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362396136313435313263386334656536623036343764633030656332 Jan 13 23:46:25.156000 audit: BPF prog-id=106 op=UNLOAD Jan 13 23:46:25.156000 audit[3131]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2996 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:25.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362396136313435313263386334656536623036343764633030656332 Jan 13 23:46:25.156000 audit: BPF prog-id=107 op=LOAD Jan 13 23:46:25.156000 audit[3131]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2996 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:25.156000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362396136313435313263386334656536623036343764633030656332 Jan 13 23:46:25.156000 audit: BPF prog-id=108 op=LOAD Jan 13 23:46:25.156000 audit[3131]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2996 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:25.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362396136313435313263386334656536623036343764633030656332 Jan 13 23:46:25.158000 audit: BPF prog-id=108 op=UNLOAD Jan 13 23:46:25.158000 audit[3131]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2996 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:25.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362396136313435313263386334656536623036343764633030656332 Jan 13 23:46:25.158000 audit: BPF prog-id=107 op=UNLOAD Jan 13 23:46:25.158000 audit[3131]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2996 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:25.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362396136313435313263386334656536623036343764633030656332 Jan 13 23:46:25.158000 audit: BPF prog-id=109 op=LOAD Jan 13 23:46:25.158000 audit[3131]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2996 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:25.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362396136313435313263386334656536623036343764633030656332 Jan 13 23:46:25.246892 systemd[1]: Started cri-containerd-dad01a1114c6d840a7b4b60001f9c8597697badb3ce9eb73cd3944ea3ea1f29e.scope - libcontainer container dad01a1114c6d840a7b4b60001f9c8597697badb3ce9eb73cd3944ea3ea1f29e. Jan 13 23:46:25.305804 systemd[1]: Started cri-containerd-c83bfc99f552d075b03e32f27925d564adb343aca58adb0ab88b2819d3c75122.scope - libcontainer container c83bfc99f552d075b03e32f27925d564adb343aca58adb0ab88b2819d3c75122. 
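Each "connecting to shim" entry above names the ttrpc socket of the shim that owns a sandbox, and containers are dialed on their sandbox's socket: the kube-apiserver container 3b9a6145... uses the same unix:///run/containerd/s/bcd7bc79... address as its sandbox 488b931f..., and dad01a11... shares bc59c77e... with sandbox 078426d5.... A throwaway sketch for checking whether one of these sockets is still accepting connections (path copied verbatim from the log; it exists only while that shim is alive):

// Dial a containerd shim socket seen in the log above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Shim address taken verbatim from the "connecting to shim" entries above.
	const sock = "/run/containerd/s/bcd7bc798216c0be79fe96ff98695fc9f961d0d234c5f875edebbc431848b1ce"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("shim socket not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("shim socket is accepting connections")
}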
Jan 13 23:46:25.350000 audit: BPF prog-id=110 op=LOAD Jan 13 23:46:25.355000 audit: BPF prog-id=111 op=LOAD Jan 13 23:46:25.355000 audit[3138]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000b2180 a2=98 a3=0 items=0 ppid=3015 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:25.355000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461643031613131313463366438343061376234623630303031663963 Jan 13 23:46:25.358000 audit: BPF prog-id=111 op=UNLOAD Jan 13 23:46:25.358000 audit[3138]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3015 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:25.358000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461643031613131313463366438343061376234623630303031663963 Jan 13 23:46:25.360000 audit: BPF prog-id=112 op=LOAD Jan 13 23:46:25.360000 audit[3138]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000b23e8 a2=98 a3=0 items=0 ppid=3015 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:25.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461643031613131313463366438343061376234623630303031663963 Jan 13 23:46:25.360000 audit: BPF prog-id=113 op=LOAD Jan 13 23:46:25.360000 audit[3138]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000b2168 a2=98 a3=0 items=0 ppid=3015 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:25.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461643031613131313463366438343061376234623630303031663963 Jan 13 23:46:25.360000 audit: BPF prog-id=113 op=UNLOAD Jan 13 23:46:25.360000 audit[3138]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3015 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:25.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461643031613131313463366438343061376234623630303031663963 Jan 13 23:46:25.360000 audit: BPF prog-id=112 op=UNLOAD Jan 13 23:46:25.360000 audit[3138]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3015 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:25.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461643031613131313463366438343061376234623630303031663963 Jan 13 23:46:25.360000 audit: BPF prog-id=114 op=LOAD Jan 13 23:46:25.360000 audit[3138]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000b2648 a2=98 a3=0 items=0 ppid=3015 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:25.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461643031613131313463366438343061376234623630303031663963 Jan 13 23:46:25.386092 containerd[1984]: time="2026-01-13T23:46:25.385646818Z" level=info msg="StartContainer for \"3b9a614512c8c4ee6b0647dc00ec2464d015c834e1d19773a147a37fe64aa433\" returns successfully" Jan 13 23:46:25.392000 audit: BPF prog-id=115 op=LOAD Jan 13 23:46:25.395000 audit: BPF prog-id=116 op=LOAD Jan 13 23:46:25.395000 audit[3172]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3043 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:25.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338336266633939663535326430373562303365333266323739323564 Jan 13 23:46:25.396000 audit: BPF prog-id=116 op=UNLOAD Jan 13 23:46:25.396000 audit[3172]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3043 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:25.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338336266633939663535326430373562303365333266323739323564 Jan 13 23:46:25.396000 audit: BPF prog-id=117 op=LOAD Jan 13 23:46:25.396000 audit[3172]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3043 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:25.396000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338336266633939663535326430373562303365333266323739323564 Jan 13 23:46:25.397000 audit: BPF prog-id=118 op=LOAD Jan 13 23:46:25.397000 audit[3172]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3043 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:25.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338336266633939663535326430373562303365333266323739323564 Jan 13 23:46:25.398000 audit: BPF prog-id=118 op=UNLOAD Jan 13 23:46:25.398000 audit[3172]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3043 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:25.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338336266633939663535326430373562303365333266323739323564 Jan 13 23:46:25.398000 audit: BPF prog-id=117 op=UNLOAD Jan 13 23:46:25.398000 audit[3172]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3043 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:25.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338336266633939663535326430373562303365333266323739323564 Jan 13 23:46:25.400000 audit: BPF prog-id=119 op=LOAD Jan 13 23:46:25.400000 audit[3172]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3043 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:25.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338336266633939663535326430373562303365333266323739323564 Jan 13 23:46:25.519244 kubelet[2951]: E0113 23:46:25.516888 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-147?timeout=10s\": dial tcp 172.31.28.147:6443: connect: connection refused" interval="1.6s" Jan 13 23:46:25.548530 kubelet[2951]: E0113 23:46:25.544949 2951 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://172.31.28.147:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 13 23:46:25.560068 kubelet[2951]: E0113 23:46:25.559343 2951 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 13 23:46:25.780619 kubelet[2951]: E0113 23:46:25.779077 2951 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 13 23:46:25.793455 kubelet[2951]: I0113 23:46:25.792298 2951 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-147" Jan 13 23:46:25.796267 containerd[1984]: time="2026-01-13T23:46:25.795832908Z" level=info msg="StartContainer for \"dad01a1114c6d840a7b4b60001f9c8597697badb3ce9eb73cd3944ea3ea1f29e\" returns successfully" Jan 13 23:46:25.797329 kubelet[2951]: E0113 23:46:25.797205 2951 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.147:6443/api/v1/nodes\": dial tcp 172.31.28.147:6443: connect: connection refused" node="ip-172-31-28-147" Jan 13 23:46:25.815381 containerd[1984]: time="2026-01-13T23:46:25.814982316Z" level=info msg="StartContainer for \"c83bfc99f552d075b03e32f27925d564adb343aca58adb0ab88b2819d3c75122\" returns successfully" Jan 13 23:46:26.333992 kubelet[2951]: E0113 23:46:26.333929 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-147\" not found" node="ip-172-31-28-147" Jan 13 23:46:26.362551 kubelet[2951]: E0113 23:46:26.361875 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-147\" not found" node="ip-172-31-28-147" Jan 13 23:46:26.367032 kubelet[2951]: E0113 23:46:26.366998 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-147\" not found" node="ip-172-31-28-147" Jan 13 23:46:27.356948 kubelet[2951]: E0113 23:46:27.356895 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-147\" not found" node="ip-172-31-28-147" Jan 13 23:46:27.359684 kubelet[2951]: E0113 23:46:27.359618 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-147\" not found" node="ip-172-31-28-147" Jan 13 23:46:27.360663 kubelet[2951]: E0113 23:46:27.358728 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-147\" not found" node="ip-172-31-28-147" Jan 13 23:46:27.400251 kubelet[2951]: I0113 23:46:27.400187 2951 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-147" Jan 13 23:46:28.360381 kubelet[2951]: E0113 23:46:28.360330 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info 
from the cluster" err="node \"ip-172-31-28-147\" not found" node="ip-172-31-28-147" Jan 13 23:46:28.361452 kubelet[2951]: E0113 23:46:28.361389 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-147\" not found" node="ip-172-31-28-147" Jan 13 23:46:29.357173 kubelet[2951]: E0113 23:46:29.357114 2951 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-147\" not found" node="ip-172-31-28-147" Jan 13 23:46:29.365528 kubelet[2951]: E0113 23:46:29.365305 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-147\" not found" node="ip-172-31-28-147" Jan 13 23:46:29.449262 kubelet[2951]: E0113 23:46:29.449105 2951 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-28-147.188a6f1f5639bfc0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-147,UID:ip-172-31-28-147,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-147,},FirstTimestamp:2026-01-13 23:46:24.07763552 +0000 UTC m=+3.162719501,LastTimestamp:2026-01-13 23:46:24.07763552 +0000 UTC m=+3.162719501,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-147,}" Jan 13 23:46:29.511208 kubelet[2951]: I0113 23:46:29.511147 2951 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-147" Jan 13 23:46:29.517116 kubelet[2951]: E0113 23:46:29.516943 2951 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-28-147.188a6f1f58a025f4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-147,UID:ip-172-31-28-147,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-28-147,},FirstTimestamp:2026-01-13 23:46:24.117900788 +0000 UTC m=+3.202984781,LastTimestamp:2026-01-13 23:46:24.117900788 +0000 UTC m=+3.202984781,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-147,}" Jan 13 23:46:29.603865 kubelet[2951]: I0113 23:46:29.603021 2951 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-147" Jan 13 23:46:29.636269 kubelet[2951]: E0113 23:46:29.635610 2951 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-147\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-28-147" Jan 13 23:46:29.636269 kubelet[2951]: I0113 23:46:29.635664 2951 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-147" Jan 13 23:46:29.648575 kubelet[2951]: E0113 23:46:29.647764 2951 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-147\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-28-147" Jan 13 23:46:29.648575 kubelet[2951]: I0113 
23:46:29.647816 2951 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-147" Jan 13 23:46:29.663539 kubelet[2951]: E0113 23:46:29.663460 2951 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-147\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-28-147" Jan 13 23:46:30.081898 kubelet[2951]: I0113 23:46:30.081847 2951 apiserver.go:52] "Watching apiserver" Jan 13 23:46:30.103541 kubelet[2951]: I0113 23:46:30.103442 2951 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 13 23:46:32.347234 systemd[1]: Reload requested from client PID 3415 ('systemctl') (unit session-8.scope)... Jan 13 23:46:32.347260 systemd[1]: Reloading... Jan 13 23:46:32.561551 zram_generator::config[3462]: No configuration found. Jan 13 23:46:32.709956 kubelet[2951]: I0113 23:46:32.709899 2951 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-147" Jan 13 23:46:33.090796 systemd[1]: Reloading finished in 742 ms. Jan 13 23:46:33.135032 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:46:33.152170 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 23:46:33.152876 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:46:33.159532 kernel: kauditd_printk_skb: 158 callbacks suppressed Jan 13 23:46:33.159658 kernel: audit: type=1131 audit(1768347993.152:415): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:46:33.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:46:33.158943 systemd[1]: kubelet.service: Consumed 3.982s CPU time, 128.2M memory peak. Jan 13 23:46:33.165432 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
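The three "Failed creating a mirror pod" errors above are transient: system-node-critical is one of the priority classes the API server itself creates while it finishes bootstrapping, so static-pod mirrors are rejected only during that window. A hedged client-go sketch for checking when it becomes available (the /etc/kubernetes/admin.conf path is an assumption typical of kubeadm-style control-plane nodes, not something taken from this log):

// Check whether the built-in system-node-critical PriorityClass exists yet.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for the cluster at hand.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pc, err := client.SchedulingV1().PriorityClasses().Get(
		context.Background(), "system-node-critical", metav1.GetOptions{})
	if err != nil {
		// Corresponds to the "no PriorityClass with name system-node-critical was found"
		// mirror-pod errors in the log.
		fmt.Println("not available yet:", err)
		return
	}
	fmt.Printf("%s exists with value %d\n", pc.Name, pc.Value)
}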
Jan 13 23:46:33.168000 audit: BPF prog-id=120 op=LOAD Jan 13 23:46:33.168000 audit: BPF prog-id=87 op=UNLOAD Jan 13 23:46:33.176301 kernel: audit: type=1334 audit(1768347993.168:416): prog-id=120 op=LOAD Jan 13 23:46:33.176393 kernel: audit: type=1334 audit(1768347993.168:417): prog-id=87 op=UNLOAD Jan 13 23:46:33.168000 audit: BPF prog-id=121 op=LOAD Jan 13 23:46:33.180993 kernel: audit: type=1334 audit(1768347993.168:418): prog-id=121 op=LOAD Jan 13 23:46:33.181068 kernel: audit: type=1334 audit(1768347993.168:419): prog-id=122 op=LOAD Jan 13 23:46:33.168000 audit: BPF prog-id=122 op=LOAD Jan 13 23:46:33.186200 kernel: audit: type=1334 audit(1768347993.168:420): prog-id=88 op=UNLOAD Jan 13 23:46:33.168000 audit: BPF prog-id=88 op=UNLOAD Jan 13 23:46:33.168000 audit: BPF prog-id=89 op=UNLOAD Jan 13 23:46:33.171000 audit: BPF prog-id=123 op=LOAD Jan 13 23:46:33.189822 kernel: audit: type=1334 audit(1768347993.168:421): prog-id=89 op=UNLOAD Jan 13 23:46:33.189912 kernel: audit: type=1334 audit(1768347993.171:422): prog-id=123 op=LOAD Jan 13 23:46:33.171000 audit: BPF prog-id=73 op=UNLOAD Jan 13 23:46:33.172000 audit: BPF prog-id=124 op=LOAD Jan 13 23:46:33.172000 audit: BPF prog-id=125 op=LOAD Jan 13 23:46:33.172000 audit: BPF prog-id=74 op=UNLOAD Jan 13 23:46:33.172000 audit: BPF prog-id=75 op=UNLOAD Jan 13 23:46:33.177000 audit: BPF prog-id=126 op=LOAD Jan 13 23:46:33.177000 audit: BPF prog-id=84 op=UNLOAD Jan 13 23:46:33.177000 audit: BPF prog-id=127 op=LOAD Jan 13 23:46:33.177000 audit: BPF prog-id=128 op=LOAD Jan 13 23:46:33.190587 kernel: audit: type=1334 audit(1768347993.171:423): prog-id=73 op=UNLOAD Jan 13 23:46:33.190653 kernel: audit: type=1334 audit(1768347993.172:424): prog-id=124 op=LOAD Jan 13 23:46:33.177000 audit: BPF prog-id=85 op=UNLOAD Jan 13 23:46:33.177000 audit: BPF prog-id=86 op=UNLOAD Jan 13 23:46:33.178000 audit: BPF prog-id=129 op=LOAD Jan 13 23:46:33.178000 audit: BPF prog-id=81 op=UNLOAD Jan 13 23:46:33.182000 audit: BPF prog-id=130 op=LOAD Jan 13 23:46:33.182000 audit: BPF prog-id=131 op=LOAD Jan 13 23:46:33.182000 audit: BPF prog-id=82 op=UNLOAD Jan 13 23:46:33.182000 audit: BPF prog-id=83 op=UNLOAD Jan 13 23:46:33.183000 audit: BPF prog-id=132 op=LOAD Jan 13 23:46:33.183000 audit: BPF prog-id=76 op=UNLOAD Jan 13 23:46:33.190000 audit: BPF prog-id=133 op=LOAD Jan 13 23:46:33.190000 audit: BPF prog-id=72 op=UNLOAD Jan 13 23:46:33.198000 audit: BPF prog-id=134 op=LOAD Jan 13 23:46:33.198000 audit: BPF prog-id=77 op=UNLOAD Jan 13 23:46:33.199000 audit: BPF prog-id=135 op=LOAD Jan 13 23:46:33.199000 audit: BPF prog-id=136 op=LOAD Jan 13 23:46:33.199000 audit: BPF prog-id=70 op=UNLOAD Jan 13 23:46:33.199000 audit: BPF prog-id=71 op=UNLOAD Jan 13 23:46:33.201000 audit: BPF prog-id=137 op=LOAD Jan 13 23:46:33.205000 audit: BPF prog-id=78 op=UNLOAD Jan 13 23:46:33.205000 audit: BPF prog-id=138 op=LOAD Jan 13 23:46:33.205000 audit: BPF prog-id=139 op=LOAD Jan 13 23:46:33.205000 audit: BPF prog-id=79 op=UNLOAD Jan 13 23:46:33.205000 audit: BPF prog-id=80 op=UNLOAD Jan 13 23:46:33.563990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:46:33.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:46:33.584126 (kubelet)[3522]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 23:46:33.685029 kubelet[3522]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 23:46:33.686353 kubelet[3522]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 13 23:46:33.686353 kubelet[3522]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 23:46:33.686353 kubelet[3522]: I0113 23:46:33.685750 3522 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 23:46:33.706623 kubelet[3522]: I0113 23:46:33.706572 3522 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 13 23:46:33.706849 kubelet[3522]: I0113 23:46:33.706827 3522 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 23:46:33.707409 kubelet[3522]: I0113 23:46:33.707383 3522 server.go:956] "Client rotation is on, will bootstrap in background" Jan 13 23:46:33.709989 kubelet[3522]: I0113 23:46:33.709948 3522 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 13 23:46:33.717450 kubelet[3522]: I0113 23:46:33.717385 3522 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 23:46:33.729621 kubelet[3522]: I0113 23:46:33.729555 3522 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 13 23:46:33.737793 kubelet[3522]: I0113 23:46:33.737725 3522 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 23:46:33.738486 kubelet[3522]: I0113 23:46:33.738194 3522 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 23:46:33.739253 kubelet[3522]: I0113 23:46:33.738262 3522 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-147","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 23:46:33.739418 kubelet[3522]: I0113 23:46:33.739269 3522 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 23:46:33.739418 kubelet[3522]: I0113 23:46:33.739295 3522 container_manager_linux.go:303] "Creating device plugin manager" Jan 13 23:46:33.739418 kubelet[3522]: I0113 23:46:33.739376 3522 state_mem.go:36] "Initialized new in-memory state store" Jan 13 23:46:33.739687 kubelet[3522]: I0113 23:46:33.739666 3522 kubelet.go:480] "Attempting to sync node with API server" Jan 13 23:46:33.739738 kubelet[3522]: I0113 23:46:33.739691 3522 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 23:46:33.739793 kubelet[3522]: I0113 23:46:33.739739 3522 kubelet.go:386] "Adding apiserver pod source" Jan 13 23:46:33.739793 kubelet[3522]: I0113 23:46:33.739771 3522 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 23:46:33.745451 kubelet[3522]: I0113 23:46:33.745380 3522 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 13 23:46:33.749419 kubelet[3522]: I0113 23:46:33.749344 3522 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 13 23:46:33.763120 kubelet[3522]: I0113 23:46:33.763072 3522 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 13 23:46:33.763281 kubelet[3522]: I0113 23:46:33.763159 3522 server.go:1289] "Started kubelet" Jan 13 23:46:33.764532 kubelet[3522]: I0113 23:46:33.764099 3522 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 
23:46:33.769436 kubelet[3522]: I0113 23:46:33.768629 3522 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 23:46:33.769436 kubelet[3522]: I0113 23:46:33.768751 3522 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 23:46:33.772544 kubelet[3522]: I0113 23:46:33.772443 3522 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 23:46:33.774873 kubelet[3522]: I0113 23:46:33.774810 3522 server.go:317] "Adding debug handlers to kubelet server" Jan 13 23:46:33.778607 kubelet[3522]: I0113 23:46:33.778078 3522 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 23:46:33.779056 kubelet[3522]: I0113 23:46:33.779027 3522 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 13 23:46:33.779651 kubelet[3522]: E0113 23:46:33.779613 3522 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-147\" not found" Jan 13 23:46:33.781078 kubelet[3522]: I0113 23:46:33.781016 3522 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 13 23:46:33.781724 kubelet[3522]: I0113 23:46:33.781694 3522 reconciler.go:26] "Reconciler: start to sync state" Jan 13 23:46:33.797558 kubelet[3522]: I0113 23:46:33.797336 3522 factory.go:223] Registration of the systemd container factory successfully Jan 13 23:46:33.798038 kubelet[3522]: I0113 23:46:33.798002 3522 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 23:46:33.805168 kubelet[3522]: E0113 23:46:33.805105 3522 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 23:46:33.809715 kubelet[3522]: I0113 23:46:33.807977 3522 factory.go:223] Registration of the containerd container factory successfully Jan 13 23:46:33.879160 kubelet[3522]: I0113 23:46:33.879005 3522 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 13 23:46:33.894822 kubelet[3522]: I0113 23:46:33.894666 3522 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 13 23:46:33.894822 kubelet[3522]: I0113 23:46:33.894718 3522 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 13 23:46:33.894822 kubelet[3522]: I0113 23:46:33.894752 3522 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 13 23:46:33.894822 kubelet[3522]: I0113 23:46:33.894769 3522 kubelet.go:2436] "Starting kubelet main sync loop" Jan 13 23:46:33.895117 kubelet[3522]: E0113 23:46:33.894840 3522 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 23:46:33.958327 kubelet[3522]: I0113 23:46:33.958169 3522 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 13 23:46:33.958683 kubelet[3522]: I0113 23:46:33.958543 3522 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 13 23:46:33.958683 kubelet[3522]: I0113 23:46:33.958606 3522 state_mem.go:36] "Initialized new in-memory state store" Jan 13 23:46:33.959180 kubelet[3522]: I0113 23:46:33.959101 3522 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 23:46:33.959180 kubelet[3522]: I0113 23:46:33.959128 3522 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 23:46:33.959392 kubelet[3522]: I0113 23:46:33.959309 3522 policy_none.go:49] "None policy: Start" Jan 13 23:46:33.959392 kubelet[3522]: I0113 23:46:33.959337 3522 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 13 23:46:33.959392 kubelet[3522]: I0113 23:46:33.959359 3522 state_mem.go:35] "Initializing new in-memory state store" Jan 13 23:46:33.959978 kubelet[3522]: I0113 23:46:33.959869 3522 state_mem.go:75] "Updated machine memory state" Jan 13 23:46:33.969884 kubelet[3522]: E0113 23:46:33.969751 3522 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 13 23:46:33.972550 kubelet[3522]: I0113 23:46:33.971685 3522 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 23:46:33.972550 kubelet[3522]: I0113 23:46:33.971724 3522 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 23:46:33.972550 kubelet[3522]: I0113 23:46:33.972172 3522 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 23:46:33.979682 kubelet[3522]: E0113 23:46:33.979593 3522 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 13 23:46:33.996062 kubelet[3522]: I0113 23:46:33.995994 3522 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-147" Jan 13 23:46:33.997254 kubelet[3522]: I0113 23:46:33.997219 3522 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-147" Jan 13 23:46:33.998698 kubelet[3522]: I0113 23:46:33.998662 3522 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-147" Jan 13 23:46:34.018937 kubelet[3522]: E0113 23:46:34.018689 3522 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-147\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-28-147" Jan 13 23:46:34.089584 kubelet[3522]: I0113 23:46:34.089544 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/62f2ff7a3c3030a8337188ca0d6c04c2-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-147\" (UID: \"62f2ff7a3c3030a8337188ca0d6c04c2\") " pod="kube-system/kube-apiserver-ip-172-31-28-147" Jan 13 23:46:34.090584 kubelet[3522]: I0113 23:46:34.089559 3522 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-147" Jan 13 23:46:34.090584 kubelet[3522]: I0113 23:46:34.089936 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/62f2ff7a3c3030a8337188ca0d6c04c2-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-147\" (UID: \"62f2ff7a3c3030a8337188ca0d6c04c2\") " pod="kube-system/kube-apiserver-ip-172-31-28-147" Jan 13 23:46:34.090584 kubelet[3522]: I0113 23:46:34.089986 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e786111f2b8949a692806c538ccb2c48-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-147\" (UID: \"e786111f2b8949a692806c538ccb2c48\") " pod="kube-system/kube-controller-manager-ip-172-31-28-147" Jan 13 23:46:34.090584 kubelet[3522]: I0113 23:46:34.090026 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e786111f2b8949a692806c538ccb2c48-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-147\" (UID: \"e786111f2b8949a692806c538ccb2c48\") " pod="kube-system/kube-controller-manager-ip-172-31-28-147" Jan 13 23:46:34.090584 kubelet[3522]: I0113 23:46:34.090065 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e786111f2b8949a692806c538ccb2c48-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-147\" (UID: \"e786111f2b8949a692806c538ccb2c48\") " pod="kube-system/kube-controller-manager-ip-172-31-28-147" Jan 13 23:46:34.090584 kubelet[3522]: I0113 23:46:34.090100 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e786111f2b8949a692806c538ccb2c48-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-147\" (UID: \"e786111f2b8949a692806c538ccb2c48\") " pod="kube-system/kube-controller-manager-ip-172-31-28-147" Jan 13 23:46:34.090951 kubelet[3522]: I0113 23:46:34.090138 3522 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e786111f2b8949a692806c538ccb2c48-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-147\" (UID: \"e786111f2b8949a692806c538ccb2c48\") " pod="kube-system/kube-controller-manager-ip-172-31-28-147" Jan 13 23:46:34.090951 kubelet[3522]: I0113 23:46:34.090185 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f5de2aa30ea7498304d204c99202f43-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-147\" (UID: \"0f5de2aa30ea7498304d204c99202f43\") " pod="kube-system/kube-scheduler-ip-172-31-28-147" Jan 13 23:46:34.090951 kubelet[3522]: I0113 23:46:34.090220 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/62f2ff7a3c3030a8337188ca0d6c04c2-ca-certs\") pod \"kube-apiserver-ip-172-31-28-147\" (UID: \"62f2ff7a3c3030a8337188ca0d6c04c2\") " pod="kube-system/kube-apiserver-ip-172-31-28-147" Jan 13 23:46:34.109640 kubelet[3522]: I0113 23:46:34.109491 3522 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-28-147" Jan 13 23:46:34.110619 kubelet[3522]: I0113 23:46:34.109933 3522 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-147" Jan 13 23:46:34.740965 kubelet[3522]: I0113 23:46:34.740898 3522 apiserver.go:52] "Watching apiserver" Jan 13 23:46:34.781632 kubelet[3522]: I0113 23:46:34.781566 3522 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 13 23:46:34.955011 kubelet[3522]: I0113 23:46:34.954941 3522 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-147" Jan 13 23:46:34.972474 kubelet[3522]: E0113 23:46:34.972420 3522 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-147\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-147" Jan 13 23:46:35.006323 kubelet[3522]: I0113 23:46:35.005763 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-147" podStartSLOduration=3.005746338 podStartE2EDuration="3.005746338s" podCreationTimestamp="2026-01-13 23:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:46:35.005730318 +0000 UTC m=+1.413439304" watchObservedRunningTime="2026-01-13 23:46:35.005746338 +0000 UTC m=+1.413455300" Jan 13 23:46:35.040711 kubelet[3522]: I0113 23:46:35.040609 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-147" podStartSLOduration=1.040586358 podStartE2EDuration="1.040586358s" podCreationTimestamp="2026-01-13 23:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:46:35.02580987 +0000 UTC m=+1.433518844" watchObservedRunningTime="2026-01-13 23:46:35.040586358 +0000 UTC m=+1.448295332" Jan 13 23:46:35.064036 kubelet[3522]: I0113 23:46:35.063896 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-147" podStartSLOduration=1.063814206 podStartE2EDuration="1.063814206s" podCreationTimestamp="2026-01-13 23:46:34 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:46:35.045763326 +0000 UTC m=+1.453472300" watchObservedRunningTime="2026-01-13 23:46:35.063814206 +0000 UTC m=+1.471523192" Jan 13 23:46:36.769967 kubelet[3522]: I0113 23:46:36.769918 3522 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 23:46:36.770958 kubelet[3522]: I0113 23:46:36.770763 3522 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 23:46:36.771017 containerd[1984]: time="2026-01-13T23:46:36.770364311Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 23:46:37.468541 systemd[1]: Created slice kubepods-besteffort-podafe3297f_84fe_4664_aa30_c4964afc2166.slice - libcontainer container kubepods-besteffort-podafe3297f_84fe_4664_aa30_c4964afc2166.slice. Jan 13 23:46:37.516958 kubelet[3522]: I0113 23:46:37.516902 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmjts\" (UniqueName: \"kubernetes.io/projected/afe3297f-84fe-4664-aa30-c4964afc2166-kube-api-access-mmjts\") pod \"kube-proxy-k9f4p\" (UID: \"afe3297f-84fe-4664-aa30-c4964afc2166\") " pod="kube-system/kube-proxy-k9f4p" Jan 13 23:46:37.517219 kubelet[3522]: I0113 23:46:37.516976 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/afe3297f-84fe-4664-aa30-c4964afc2166-kube-proxy\") pod \"kube-proxy-k9f4p\" (UID: \"afe3297f-84fe-4664-aa30-c4964afc2166\") " pod="kube-system/kube-proxy-k9f4p" Jan 13 23:46:37.517219 kubelet[3522]: I0113 23:46:37.517020 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afe3297f-84fe-4664-aa30-c4964afc2166-xtables-lock\") pod \"kube-proxy-k9f4p\" (UID: \"afe3297f-84fe-4664-aa30-c4964afc2166\") " pod="kube-system/kube-proxy-k9f4p" Jan 13 23:46:37.517219 kubelet[3522]: I0113 23:46:37.517055 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afe3297f-84fe-4664-aa30-c4964afc2166-lib-modules\") pod \"kube-proxy-k9f4p\" (UID: \"afe3297f-84fe-4664-aa30-c4964afc2166\") " pod="kube-system/kube-proxy-k9f4p" Jan 13 23:46:37.727130 systemd[1]: Created slice kubepods-besteffort-pod75fc2771_b89d_4cd3_861b_e7d66b68a6b9.slice - libcontainer container kubepods-besteffort-pod75fc2771_b89d_4cd3_861b_e7d66b68a6b9.slice. 
Jan 13 23:46:37.784098 containerd[1984]: time="2026-01-13T23:46:37.784002564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k9f4p,Uid:afe3297f-84fe-4664-aa30-c4964afc2166,Namespace:kube-system,Attempt:0,}" Jan 13 23:46:37.819854 kubelet[3522]: I0113 23:46:37.819795 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28gk7\" (UniqueName: \"kubernetes.io/projected/75fc2771-b89d-4cd3-861b-e7d66b68a6b9-kube-api-access-28gk7\") pod \"tigera-operator-7dcd859c48-2q6ct\" (UID: \"75fc2771-b89d-4cd3-861b-e7d66b68a6b9\") " pod="tigera-operator/tigera-operator-7dcd859c48-2q6ct" Jan 13 23:46:37.820350 kubelet[3522]: I0113 23:46:37.819872 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/75fc2771-b89d-4cd3-861b-e7d66b68a6b9-var-lib-calico\") pod \"tigera-operator-7dcd859c48-2q6ct\" (UID: \"75fc2771-b89d-4cd3-861b-e7d66b68a6b9\") " pod="tigera-operator/tigera-operator-7dcd859c48-2q6ct" Jan 13 23:46:37.835187 containerd[1984]: time="2026-01-13T23:46:37.835004124Z" level=info msg="connecting to shim 91619992de7266e9124f123d6250dc26d254138048efc43706c93e11bbc59b8d" address="unix:///run/containerd/s/f0c4f22588dd12203e1a5a82b8a572149439f47244fa9cb283f868564ab23aa3" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:46:37.881892 systemd[1]: Started cri-containerd-91619992de7266e9124f123d6250dc26d254138048efc43706c93e11bbc59b8d.scope - libcontainer container 91619992de7266e9124f123d6250dc26d254138048efc43706c93e11bbc59b8d. Jan 13 23:46:37.903000 audit: BPF prog-id=140 op=LOAD Jan 13 23:46:37.904000 audit: BPF prog-id=141 op=LOAD Jan 13 23:46:37.904000 audit[3594]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3583 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:37.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931363139393932646537323636653931323466313233643632353064 Jan 13 23:46:37.904000 audit: BPF prog-id=141 op=UNLOAD Jan 13 23:46:37.904000 audit[3594]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3583 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:37.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931363139393932646537323636653931323466313233643632353064 Jan 13 23:46:37.905000 audit: BPF prog-id=142 op=LOAD Jan 13 23:46:37.905000 audit[3594]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3583 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:37.905000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931363139393932646537323636653931323466313233643632353064 Jan 13 23:46:37.905000 audit: BPF prog-id=143 op=LOAD Jan 13 23:46:37.905000 audit[3594]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3583 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:37.905000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931363139393932646537323636653931323466313233643632353064 Jan 13 23:46:37.905000 audit: BPF prog-id=143 op=UNLOAD Jan 13 23:46:37.905000 audit[3594]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3583 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:37.905000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931363139393932646537323636653931323466313233643632353064 Jan 13 23:46:37.905000 audit: BPF prog-id=142 op=UNLOAD Jan 13 23:46:37.905000 audit[3594]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3583 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:37.905000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931363139393932646537323636653931323466313233643632353064 Jan 13 23:46:37.905000 audit: BPF prog-id=144 op=LOAD Jan 13 23:46:37.905000 audit[3594]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3583 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:37.905000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931363139393932646537323636653931323466313233643632353064 Jan 13 23:46:37.944931 containerd[1984]: time="2026-01-13T23:46:37.944815992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k9f4p,Uid:afe3297f-84fe-4664-aa30-c4964afc2166,Namespace:kube-system,Attempt:0,} returns sandbox id \"91619992de7266e9124f123d6250dc26d254138048efc43706c93e11bbc59b8d\"" Jan 13 23:46:37.966488 containerd[1984]: time="2026-01-13T23:46:37.965286288Z" level=info msg="CreateContainer within sandbox \"91619992de7266e9124f123d6250dc26d254138048efc43706c93e11bbc59b8d\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 23:46:37.988237 containerd[1984]: time="2026-01-13T23:46:37.988114693Z" level=info msg="Container 0c7d94c7e06157fd7b7d982ccfe4eb1b57dc49808fdcb757a02b91bcf0baa9e7: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:46:38.010326 containerd[1984]: time="2026-01-13T23:46:38.010247241Z" level=info msg="CreateContainer within sandbox \"91619992de7266e9124f123d6250dc26d254138048efc43706c93e11bbc59b8d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0c7d94c7e06157fd7b7d982ccfe4eb1b57dc49808fdcb757a02b91bcf0baa9e7\"" Jan 13 23:46:38.011610 containerd[1984]: time="2026-01-13T23:46:38.011431845Z" level=info msg="StartContainer for \"0c7d94c7e06157fd7b7d982ccfe4eb1b57dc49808fdcb757a02b91bcf0baa9e7\"" Jan 13 23:46:38.015483 containerd[1984]: time="2026-01-13T23:46:38.015432885Z" level=info msg="connecting to shim 0c7d94c7e06157fd7b7d982ccfe4eb1b57dc49808fdcb757a02b91bcf0baa9e7" address="unix:///run/containerd/s/f0c4f22588dd12203e1a5a82b8a572149439f47244fa9cb283f868564ab23aa3" protocol=ttrpc version=3 Jan 13 23:46:38.040783 containerd[1984]: time="2026-01-13T23:46:38.040498089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2q6ct,Uid:75fc2771-b89d-4cd3-861b-e7d66b68a6b9,Namespace:tigera-operator,Attempt:0,}" Jan 13 23:46:38.056954 systemd[1]: Started cri-containerd-0c7d94c7e06157fd7b7d982ccfe4eb1b57dc49808fdcb757a02b91bcf0baa9e7.scope - libcontainer container 0c7d94c7e06157fd7b7d982ccfe4eb1b57dc49808fdcb757a02b91bcf0baa9e7. Jan 13 23:46:38.087158 containerd[1984]: time="2026-01-13T23:46:38.086783073Z" level=info msg="connecting to shim 8a7d0d002ebe6f49ed3bd9d02fc58f011a523d2ed2783d4ed0b46371bf7d10e1" address="unix:///run/containerd/s/786016f39148a78cab08c019cd952fd61c360712d71e68315abc0be9c700a075" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:46:38.136107 systemd[1]: Started cri-containerd-8a7d0d002ebe6f49ed3bd9d02fc58f011a523d2ed2783d4ed0b46371bf7d10e1.scope - libcontainer container 8a7d0d002ebe6f49ed3bd9d02fc58f011a523d2ed2783d4ed0b46371bf7d10e1. 
Jan 13 23:46:38.149000 audit: BPF prog-id=145 op=LOAD Jan 13 23:46:38.149000 audit[3622]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3583 pid=3622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063376439346337653036313537666437623764393832636366653465 Jan 13 23:46:38.151000 audit: BPF prog-id=146 op=LOAD Jan 13 23:46:38.154982 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 13 23:46:38.155059 kernel: audit: type=1334 audit(1768347998.151:466): prog-id=146 op=LOAD Jan 13 23:46:38.151000 audit[3622]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3583 pid=3622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.163425 kernel: audit: type=1300 audit(1768347998.151:466): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3583 pid=3622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063376439346337653036313537666437623764393832636366653465 Jan 13 23:46:38.171013 kernel: audit: type=1327 audit(1768347998.151:466): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063376439346337653036313537666437623764393832636366653465 Jan 13 23:46:38.172562 kernel: audit: type=1334 audit(1768347998.152:467): prog-id=146 op=UNLOAD Jan 13 23:46:38.152000 audit: BPF prog-id=146 op=UNLOAD Jan 13 23:46:38.152000 audit[3622]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3583 pid=3622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063376439346337653036313537666437623764393832636366653465 Jan 13 23:46:38.187128 kernel: audit: type=1300 audit(1768347998.152:467): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3583 pid=3622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.187236 kernel: audit: type=1327 audit(1768347998.152:467): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063376439346337653036313537666437623764393832636366653465 Jan 13 23:46:38.152000 audit: BPF prog-id=145 op=UNLOAD Jan 13 23:46:38.189996 kernel: audit: type=1334 audit(1768347998.152:468): prog-id=145 op=UNLOAD Jan 13 23:46:38.152000 audit[3622]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3583 pid=3622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.196311 kernel: audit: type=1300 audit(1768347998.152:468): arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3583 pid=3622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063376439346337653036313537666437623764393832636366653465 Jan 13 23:46:38.203312 kernel: audit: type=1327 audit(1768347998.152:468): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063376439346337653036313537666437623764393832636366653465 Jan 13 23:46:38.152000 audit: BPF prog-id=147 op=LOAD Jan 13 23:46:38.210440 kernel: audit: type=1334 audit(1768347998.152:469): prog-id=147 op=LOAD Jan 13 23:46:38.152000 audit[3622]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3583 pid=3622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063376439346337653036313537666437623764393832636366653465 Jan 13 23:46:38.204000 audit: BPF prog-id=148 op=LOAD Jan 13 23:46:38.210000 audit: BPF prog-id=149 op=LOAD Jan 13 23:46:38.210000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=3650 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861376430643030326562653666343965643362643964303266633538 Jan 13 23:46:38.211000 audit: BPF prog-id=149 op=UNLOAD Jan 13 23:46:38.211000 audit[3661]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3650 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861376430643030326562653666343965643362643964303266633538 Jan 13 23:46:38.212000 audit: BPF prog-id=150 op=LOAD Jan 13 23:46:38.212000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=3650 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861376430643030326562653666343965643362643964303266633538 Jan 13 23:46:38.213000 audit: BPF prog-id=151 op=LOAD Jan 13 23:46:38.213000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=3650 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861376430643030326562653666343965643362643964303266633538 Jan 13 23:46:38.213000 audit: BPF prog-id=151 op=UNLOAD Jan 13 23:46:38.213000 audit[3661]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3650 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861376430643030326562653666343965643362643964303266633538 Jan 13 23:46:38.213000 audit: BPF prog-id=150 op=UNLOAD Jan 13 23:46:38.213000 audit[3661]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3650 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861376430643030326562653666343965643362643964303266633538 Jan 13 23:46:38.213000 audit: BPF prog-id=152 op=LOAD Jan 13 23:46:38.213000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=3650 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.213000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861376430643030326562653666343965643362643964303266633538 Jan 13 23:46:38.243612 containerd[1984]: time="2026-01-13T23:46:38.242749414Z" level=info msg="StartContainer for \"0c7d94c7e06157fd7b7d982ccfe4eb1b57dc49808fdcb757a02b91bcf0baa9e7\" returns successfully" Jan 13 23:46:38.292995 containerd[1984]: time="2026-01-13T23:46:38.292942966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2q6ct,Uid:75fc2771-b89d-4cd3-861b-e7d66b68a6b9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8a7d0d002ebe6f49ed3bd9d02fc58f011a523d2ed2783d4ed0b46371bf7d10e1\"" Jan 13 23:46:38.297834 containerd[1984]: time="2026-01-13T23:46:38.297657574Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 13 23:46:38.528000 audit[3732]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3732 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:38.528000 audit[3732]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc23ad650 a2=0 a3=1 items=0 ppid=3635 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.528000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 13 23:46:38.533000 audit[3731]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3731 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:38.533000 audit[3731]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdb069ba0 a2=0 a3=1 items=0 ppid=3635 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.533000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 13 23:46:38.536000 audit[3733]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3733 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:38.536000 audit[3733]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff35166f0 a2=0 a3=1 items=0 ppid=3635 pid=3733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.536000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 13 23:46:38.546000 audit[3737]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=3737 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:38.546000 audit[3737]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffca31c650 a2=0 a3=1 items=0 ppid=3635 pid=3737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.546000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 13 23:46:38.549000 audit[3738]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3738 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:38.549000 audit[3738]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeb545400 a2=0 a3=1 items=0 ppid=3635 pid=3738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.549000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 13 23:46:38.553000 audit[3739]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3739 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:38.553000 audit[3739]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeb954340 a2=0 a3=1 items=0 ppid=3635 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.553000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 13 23:46:38.640000 audit[3740]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3740 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:38.640000 audit[3740]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff36b3e00 a2=0 a3=1 items=0 ppid=3635 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.640000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 13 23:46:38.646000 audit[3742]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3742 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:38.646000 audit[3742]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffdffe8b00 a2=0 a3=1 items=0 ppid=3635 pid=3742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.646000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 13 23:46:38.668000 audit[3745]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3745 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:38.668000 audit[3745]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc2689840 a2=0 a3=1 items=0 ppid=3635 pid=3745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.668000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 13 23:46:38.672000 audit[3746]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3746 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:38.672000 audit[3746]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdbdc8010 a2=0 a3=1 items=0 ppid=3635 pid=3746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.672000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 13 23:46:38.678000 audit[3748]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3748 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:38.678000 audit[3748]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc188a910 a2=0 a3=1 items=0 ppid=3635 pid=3748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.678000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 13 23:46:38.685000 audit[3749]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3749 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:38.685000 audit[3749]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdc8e8160 a2=0 a3=1 items=0 ppid=3635 pid=3749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.685000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 13 23:46:38.692000 audit[3751]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3751 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:38.692000 audit[3751]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff8669b90 a2=0 a3=1 items=0 ppid=3635 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.692000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 13 23:46:38.700000 audit[3754]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3754 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:38.700000 audit[3754]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffddac1520 a2=0 a3=1 items=0 ppid=3635 pid=3754 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.700000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 13 23:46:38.702000 audit[3755]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3755 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:38.702000 audit[3755]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff321f2d0 a2=0 a3=1 items=0 ppid=3635 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.702000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 13 23:46:38.707000 audit[3757]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3757 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:38.707000 audit[3757]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffedcba3b0 a2=0 a3=1 items=0 ppid=3635 pid=3757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.707000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 13 23:46:38.710000 audit[3758]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3758 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:38.710000 audit[3758]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd553e500 a2=0 a3=1 items=0 ppid=3635 pid=3758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.710000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 13 23:46:38.715000 audit[3760]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3760 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:38.715000 audit[3760]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcc0e8200 a2=0 a3=1 items=0 ppid=3635 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.715000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 13 23:46:38.723000 audit[3763]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3763 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jan 13 23:46:38.723000 audit[3763]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd443d7f0 a2=0 a3=1 items=0 ppid=3635 pid=3763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.723000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 13 23:46:38.732000 audit[3766]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3766 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:38.732000 audit[3766]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffddaad580 a2=0 a3=1 items=0 ppid=3635 pid=3766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.732000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 13 23:46:38.735000 audit[3767]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3767 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:38.735000 audit[3767]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd81eb660 a2=0 a3=1 items=0 ppid=3635 pid=3767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.735000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 13 23:46:38.741000 audit[3769]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3769 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:38.741000 audit[3769]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffffdad3c30 a2=0 a3=1 items=0 ppid=3635 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.741000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 13 23:46:38.749000 audit[3772]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3772 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:38.749000 audit[3772]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe9cd10a0 a2=0 a3=1 items=0 ppid=3635 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.749000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 13 23:46:38.751000 audit[3773]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3773 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:38.751000 audit[3773]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd2193c60 a2=0 a3=1 items=0 ppid=3635 pid=3773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.751000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 13 23:46:38.757000 audit[3775]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3775 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:46:38.757000 audit[3775]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffff7e58630 a2=0 a3=1 items=0 ppid=3635 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.757000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 13 23:46:38.799000 audit[3781]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3781 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:46:38.799000 audit[3781]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe16f7150 a2=0 a3=1 items=0 ppid=3635 pid=3781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.799000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:46:38.809000 audit[3781]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3781 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:46:38.809000 audit[3781]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffe16f7150 a2=0 a3=1 items=0 ppid=3635 pid=3781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.809000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:46:38.812000 audit[3786]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3786 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:38.812000 audit[3786]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc4be6cb0 a2=0 a3=1 items=0 ppid=3635 pid=3786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 
23:46:38.812000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 13 23:46:38.818000 audit[3788]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3788 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:38.818000 audit[3788]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd5d28dd0 a2=0 a3=1 items=0 ppid=3635 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.818000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 13 23:46:38.826000 audit[3791]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3791 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:38.826000 audit[3791]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe2bf3fd0 a2=0 a3=1 items=0 ppid=3635 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.826000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 13 23:46:38.829000 audit[3792]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3792 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:38.829000 audit[3792]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff151d200 a2=0 a3=1 items=0 ppid=3635 pid=3792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.829000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 13 23:46:38.834000 audit[3794]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3794 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:38.834000 audit[3794]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd1436030 a2=0 a3=1 items=0 ppid=3635 pid=3794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.834000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 13 23:46:38.836000 audit[3795]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3795 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:38.836000 audit[3795]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffde7fb890 
a2=0 a3=1 items=0 ppid=3635 pid=3795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.836000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 13 23:46:38.842000 audit[3797]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3797 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:38.842000 audit[3797]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffdbb72530 a2=0 a3=1 items=0 ppid=3635 pid=3797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.842000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 13 23:46:38.849000 audit[3800]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3800 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:38.849000 audit[3800]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe9b6c380 a2=0 a3=1 items=0 ppid=3635 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.849000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 13 23:46:38.852000 audit[3801]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3801 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:38.852000 audit[3801]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffb1b5770 a2=0 a3=1 items=0 ppid=3635 pid=3801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.852000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 13 23:46:38.857000 audit[3803]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3803 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:38.857000 audit[3803]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff31276a0 a2=0 a3=1 items=0 ppid=3635 pid=3803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.857000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 13 23:46:38.860000 audit[3804]: NETFILTER_CFG table=filter:91 family=10 
entries=1 op=nft_register_chain pid=3804 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:38.860000 audit[3804]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdffe5c80 a2=0 a3=1 items=0 ppid=3635 pid=3804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.860000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 13 23:46:38.866000 audit[3806]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3806 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:38.866000 audit[3806]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffefb91e00 a2=0 a3=1 items=0 ppid=3635 pid=3806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.866000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 13 23:46:38.874000 audit[3809]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3809 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:38.874000 audit[3809]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd33336b0 a2=0 a3=1 items=0 ppid=3635 pid=3809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.874000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 13 23:46:38.883000 audit[3812]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3812 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:38.883000 audit[3812]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffeda01800 a2=0 a3=1 items=0 ppid=3635 pid=3812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.883000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 13 23:46:38.885000 audit[3813]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3813 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:38.885000 audit[3813]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc4db0170 a2=0 a3=1 items=0 ppid=3635 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 13 23:46:38.885000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 13 23:46:38.890000 audit[3815]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3815 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:38.890000 audit[3815]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff9883f10 a2=0 a3=1 items=0 ppid=3635 pid=3815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.890000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 13 23:46:38.901000 audit[3818]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3818 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:38.901000 audit[3818]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd8ef71b0 a2=0 a3=1 items=0 ppid=3635 pid=3818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.901000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 13 23:46:38.904000 audit[3819]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3819 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:38.904000 audit[3819]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffde62330 a2=0 a3=1 items=0 ppid=3635 pid=3819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.904000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 13 23:46:38.909000 audit[3821]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3821 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:38.909000 audit[3821]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffc8771a40 a2=0 a3=1 items=0 ppid=3635 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.909000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 13 23:46:38.912000 audit[3822]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3822 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:38.912000 audit[3822]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff5b66e30 a2=0 a3=1 items=0 ppid=3635 pid=3822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.912000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 13 23:46:38.917000 audit[3824]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3824 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:38.917000 audit[3824]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd2b9b720 a2=0 a3=1 items=0 ppid=3635 pid=3824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.917000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 13 23:46:38.924000 audit[3827]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3827 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:46:38.924000 audit[3827]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffefc84650 a2=0 a3=1 items=0 ppid=3635 pid=3827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.924000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 13 23:46:38.932000 audit[3829]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3829 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 13 23:46:38.932000 audit[3829]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffd03d3460 a2=0 a3=1 items=0 ppid=3635 pid=3829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.932000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:46:38.933000 audit[3829]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3829 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 13 23:46:38.933000 audit[3829]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffd03d3460 a2=0 a3=1 items=0 ppid=3635 pid=3829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:38.933000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:46:39.881401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2490957132.mount: Deactivated successfully. 
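
The NETFILTER_CFG / SYSCALL / PROCTITLE triples above are kube-proxy programming the IPv6 filter and nat tables through ip6tables and ip6tables-restore (both dispatched via /usr/sbin/xtables-nft-multi; syscall 211 is sendmsg on arm64, the netlink commit of each ruleset change). The PROCTITLE field is the issuing command line, hex-encoded with NUL-separated argv; the longer records above decode to truncated command lines because the audit proctitle record is capped at 128 bytes. A minimal decoding sketch in Python (the helper name is ours, not from any tool):

    # Decode an audit PROCTITLE value: hex string of the NUL-separated argv.
    def decode_proctitle(hex_argv: str) -> str:
        raw = bytes.fromhex(hex_argv)
        return " ".join(part.decode() for part in raw.split(b"\x00") if part)

    # First record in this block:
    print(decode_proctitle(
        "6970367461626C6573002D770035002D5700313030303030002D4E00"
        "4B5542452D45585445524E414C2D5345525649434553002D740066696C746572"))
    # -> ip6tables -w 5 -W 100000 -N KUBE-EXTERNAL-SERVICES -t filter
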
Jan 13 23:46:40.966523 containerd[1984]: time="2026-01-13T23:46:40.966402867Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:46:40.968540 containerd[1984]: time="2026-01-13T23:46:40.968423991Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Jan 13 23:46:40.970997 containerd[1984]: time="2026-01-13T23:46:40.970883571Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:46:40.976605 containerd[1984]: time="2026-01-13T23:46:40.976541583Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:46:40.978064 containerd[1984]: time="2026-01-13T23:46:40.978007587Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.680288045s" Jan 13 23:46:40.978226 containerd[1984]: time="2026-01-13T23:46:40.978197187Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 13 23:46:40.986611 containerd[1984]: time="2026-01-13T23:46:40.986289315Z" level=info msg="CreateContainer within sandbox \"8a7d0d002ebe6f49ed3bd9d02fc58f011a523d2ed2783d4ed0b46371bf7d10e1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 13 23:46:41.014575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3137123654.mount: Deactivated successfully. Jan 13 23:46:41.018552 containerd[1984]: time="2026-01-13T23:46:41.017920656Z" level=info msg="Container 7a207899fefd64bb05bf7c2670674792749a30d94947b22c2f9233f4b9bee0ad: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:46:41.038272 containerd[1984]: time="2026-01-13T23:46:41.038196336Z" level=info msg="CreateContainer within sandbox \"8a7d0d002ebe6f49ed3bd9d02fc58f011a523d2ed2783d4ed0b46371bf7d10e1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7a207899fefd64bb05bf7c2670674792749a30d94947b22c2f9233f4b9bee0ad\"" Jan 13 23:46:41.039533 containerd[1984]: time="2026-01-13T23:46:41.039116928Z" level=info msg="StartContainer for \"7a207899fefd64bb05bf7c2670674792749a30d94947b22c2f9233f4b9bee0ad\"" Jan 13 23:46:41.041290 containerd[1984]: time="2026-01-13T23:46:41.041198400Z" level=info msg="connecting to shim 7a207899fefd64bb05bf7c2670674792749a30d94947b22c2f9233f4b9bee0ad" address="unix:///run/containerd/s/786016f39148a78cab08c019cd952fd61c360712d71e68315abc0be9c700a075" protocol=ttrpc version=3 Jan 13 23:46:41.082863 systemd[1]: Started cri-containerd-7a207899fefd64bb05bf7c2670674792749a30d94947b22c2f9233f4b9bee0ad.scope - libcontainer container 7a207899fefd64bb05bf7c2670674792749a30d94947b22c2f9233f4b9bee0ad. 
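
From the two containerd fields above ("bytes read=20773434" for the compressed pull and "in 2.680288045s" for its duration), a rough effective transfer rate for the tigera/operator image can be estimated; a back-of-envelope sketch:

    # Rough pull throughput for quay.io/tigera/operator:v1.38.7, using the
    # two values reported by containerd above (compressed bytes / wall time).
    bytes_read = 20_773_434        # "bytes read=20773434"
    pull_seconds = 2.680288045     # "... in 2.680288045s"
    print(f"{bytes_read / pull_seconds / 2**20:.1f} MiB/s")   # ~7.4 MiB/s
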
Jan 13 23:46:41.106000 audit: BPF prog-id=153 op=LOAD Jan 13 23:46:41.107000 audit: BPF prog-id=154 op=LOAD Jan 13 23:46:41.107000 audit[3838]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3650 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:41.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761323037383939666566643634626230356266376332363730363734 Jan 13 23:46:41.108000 audit: BPF prog-id=154 op=UNLOAD Jan 13 23:46:41.108000 audit[3838]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3650 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:41.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761323037383939666566643634626230356266376332363730363734 Jan 13 23:46:41.109000 audit: BPF prog-id=155 op=LOAD Jan 13 23:46:41.109000 audit[3838]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3650 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:41.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761323037383939666566643634626230356266376332363730363734 Jan 13 23:46:41.109000 audit: BPF prog-id=156 op=LOAD Jan 13 23:46:41.109000 audit[3838]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3650 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:41.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761323037383939666566643634626230356266376332363730363734 Jan 13 23:46:41.110000 audit: BPF prog-id=156 op=UNLOAD Jan 13 23:46:41.110000 audit[3838]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3650 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:41.110000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761323037383939666566643634626230356266376332363730363734 Jan 13 23:46:41.110000 audit: BPF prog-id=155 op=UNLOAD Jan 13 23:46:41.110000 audit[3838]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3650 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:41.110000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761323037383939666566643634626230356266376332363730363734 Jan 13 23:46:41.110000 audit: BPF prog-id=157 op=LOAD Jan 13 23:46:41.110000 audit[3838]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3650 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:41.110000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761323037383939666566643634626230356266376332363730363734 Jan 13 23:46:41.155649 containerd[1984]: time="2026-01-13T23:46:41.155408736Z" level=info msg="StartContainer for \"7a207899fefd64bb05bf7c2670674792749a30d94947b22c2f9233f4b9bee0ad\" returns successfully" Jan 13 23:46:42.006108 kubelet[3522]: I0113 23:46:42.005889 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k9f4p" podStartSLOduration=5.005867005 podStartE2EDuration="5.005867005s" podCreationTimestamp="2026-01-13 23:46:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:46:38.994208702 +0000 UTC m=+5.401917700" watchObservedRunningTime="2026-01-13 23:46:42.005867005 +0000 UTC m=+8.413575979" Jan 13 23:46:42.446588 kubelet[3522]: I0113 23:46:42.446063 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-2q6ct" podStartSLOduration=2.7633989740000002 podStartE2EDuration="5.446042535s" podCreationTimestamp="2026-01-13 23:46:37 +0000 UTC" firstStartedPulling="2026-01-13 23:46:38.296787586 +0000 UTC m=+4.704496560" lastFinishedPulling="2026-01-13 23:46:40.979431147 +0000 UTC m=+7.387140121" observedRunningTime="2026-01-13 23:46:42.009957265 +0000 UTC m=+8.417666263" watchObservedRunningTime="2026-01-13 23:46:42.446042535 +0000 UTC m=+8.853751509" Jan 13 23:46:48.016652 sudo[2345]: pam_unix(sudo:session): session closed for user root Jan 13 23:46:48.023161 kernel: kauditd_printk_skb: 199 callbacks suppressed Jan 13 23:46:48.023277 kernel: audit: type=1106 audit(1768348008.015:537): pid=2345 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:46:48.015000 audit[2345]: USER_END pid=2345 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 13 23:46:48.015000 audit[2345]: CRED_DISP pid=2345 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:46:48.032143 kernel: audit: type=1104 audit(1768348008.015:538): pid=2345 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:46:48.094749 sshd[2344]: Connection closed by 68.220.241.50 port 57476 Jan 13 23:46:48.095630 sshd-session[2340]: pam_unix(sshd:session): session closed for user core Jan 13 23:46:48.097000 audit[2340]: USER_END pid=2340 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:46:48.104000 audit[2340]: CRED_DISP pid=2340 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:46:48.112293 systemd[1]: sshd@6-172.31.28.147:22-68.220.241.50:57476.service: Deactivated successfully. Jan 13 23:46:48.119236 kernel: audit: type=1106 audit(1768348008.097:539): pid=2340 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:46:48.119534 kernel: audit: type=1104 audit(1768348008.104:540): pid=2340 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:46:48.119626 kernel: audit: type=1131 audit(1768348008.114:541): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.28.147:22-68.220.241.50:57476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:46:48.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.28.147:22-68.220.241.50:57476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:46:48.123167 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 23:46:48.126437 systemd[1]: session-8.scope: Consumed 12.921s CPU time, 223.2M memory peak. Jan 13 23:46:48.138139 systemd-logind[1945]: Session 8 logged out. Waiting for processes to exit. Jan 13 23:46:48.142623 systemd-logind[1945]: Removed session 8. 
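
The kauditd lines interleaved here carry their own timestamps in audit(<epoch>.<millis>:<serial>) form; converting the epoch gives the same instant as the journald prefix on the line, which helps when cross-checking these kernel audit records against other tooling. A quick check for the USER_END record above:

    # Convert the audit(1768348008.015:537) timestamp from the USER_END record
    # into UTC and compare with the journald prefix (Jan 13 23:46:48.015).
    from datetime import datetime, timezone

    epoch, serial = "1768348008.015:537".split(":")
    ts = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
    print(ts.isoformat(), "serial", serial)
    # -> 2026-01-13T23:46:48.015000+00:00 serial 537
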
Jan 13 23:46:52.269000 audit[3915]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:46:52.269000 audit[3915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffee12b8c0 a2=0 a3=1 items=0 ppid=3635 pid=3915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:52.283285 kernel: audit: type=1325 audit(1768348012.269:542): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:46:52.283420 kernel: audit: type=1300 audit(1768348012.269:542): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffee12b8c0 a2=0 a3=1 items=0 ppid=3635 pid=3915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:52.269000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:46:52.288647 kernel: audit: type=1327 audit(1768348012.269:542): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:46:52.283000 audit[3915]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:46:52.292248 kernel: audit: type=1325 audit(1768348012.283:543): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:46:52.283000 audit[3915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffee12b8c0 a2=0 a3=1 items=0 ppid=3635 pid=3915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:52.299383 kernel: audit: type=1300 audit(1768348012.283:543): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffee12b8c0 a2=0 a3=1 items=0 ppid=3635 pid=3915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:52.283000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:46:52.317000 audit[3917]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3917 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:46:52.317000 audit[3917]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff4971be0 a2=0 a3=1 items=0 ppid=3635 pid=3917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:52.317000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:46:52.322000 audit[3917]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3917 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 13 23:46:52.322000 audit[3917]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff4971be0 a2=0 a3=1 items=0 ppid=3635 pid=3917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:46:52.322000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:00.881000 audit[3919]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:00.884985 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 13 23:47:00.885110 kernel: audit: type=1325 audit(1768348020.881:546): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:00.881000 audit[3919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffe6b4940 a2=0 a3=1 items=0 ppid=3635 pid=3919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:00.898302 kernel: audit: type=1300 audit(1768348020.881:546): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffe6b4940 a2=0 a3=1 items=0 ppid=3635 pid=3919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:00.881000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:00.908429 kernel: audit: type=1327 audit(1768348020.881:546): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:00.908703 kernel: audit: type=1325 audit(1768348020.891:547): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:00.891000 audit[3919]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:00.891000 audit[3919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffe6b4940 a2=0 a3=1 items=0 ppid=3635 pid=3919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:00.919200 kernel: audit: type=1300 audit(1768348020.891:547): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffe6b4940 a2=0 a3=1 items=0 ppid=3635 pid=3919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:00.919592 kernel: audit: type=1327 audit(1768348020.891:547): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:00.891000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:00.926000 audit[3921]: NETFILTER_CFG 
table=filter:111 family=2 entries=18 op=nft_register_rule pid=3921 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:00.926000 audit[3921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff0479980 a2=0 a3=1 items=0 ppid=3635 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:00.938551 kernel: audit: type=1325 audit(1768348020.926:548): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3921 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:00.938700 kernel: audit: type=1300 audit(1768348020.926:548): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff0479980 a2=0 a3=1 items=0 ppid=3635 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:00.938756 kernel: audit: type=1327 audit(1768348020.926:548): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:00.926000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:00.941000 audit[3921]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3921 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:00.951559 kernel: audit: type=1325 audit(1768348020.941:549): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3921 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:00.941000 audit[3921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff0479980 a2=0 a3=1 items=0 ppid=3635 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:00.941000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:02.418000 audit[3923]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3923 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:02.418000 audit[3923]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc80656d0 a2=0 a3=1 items=0 ppid=3635 pid=3923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:02.418000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:02.430000 audit[3923]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3923 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:02.430000 audit[3923]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc80656d0 a2=0 a3=1 items=0 ppid=3635 pid=3923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:02.430000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:03.449000 audit[3925]: NETFILTER_CFG table=filter:115 family=2 entries=20 op=nft_register_rule pid=3925 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:03.449000 audit[3925]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd7915b50 a2=0 a3=1 items=0 ppid=3635 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:03.449000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:03.453000 audit[3925]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3925 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:03.453000 audit[3925]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd7915b50 a2=0 a3=1 items=0 ppid=3635 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:03.453000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:05.118000 audit[3928]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3928 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:05.118000 audit[3928]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffdf096e00 a2=0 a3=1 items=0 ppid=3635 pid=3928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:05.118000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:05.123000 audit[3928]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3928 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:05.123000 audit[3928]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdf096e00 a2=0 a3=1 items=0 ppid=3635 pid=3928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:05.123000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:05.170000 audit[3930]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=3930 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:05.170000 audit[3930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe1744d10 a2=0 a3=1 items=0 ppid=3635 pid=3930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:05.170000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:05.180000 audit[3930]: 
NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=3930 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:05.180000 audit[3930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe1744d10 a2=0 a3=1 items=0 ppid=3635 pid=3930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:05.180000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:05.220071 systemd[1]: Created slice kubepods-besteffort-pod0862e935_3f08_4c78_9c4b_7ca1424a8182.slice - libcontainer container kubepods-besteffort-pod0862e935_3f08_4c78_9c4b_7ca1424a8182.slice. Jan 13 23:47:05.312618 kubelet[3522]: I0113 23:47:05.312373 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0862e935-3f08-4c78-9c4b-7ca1424a8182-typha-certs\") pod \"calico-typha-7c6f6c8587-jc7gj\" (UID: \"0862e935-3f08-4c78-9c4b-7ca1424a8182\") " pod="calico-system/calico-typha-7c6f6c8587-jc7gj" Jan 13 23:47:05.313968 kubelet[3522]: I0113 23:47:05.313196 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zbdl\" (UniqueName: \"kubernetes.io/projected/0862e935-3f08-4c78-9c4b-7ca1424a8182-kube-api-access-4zbdl\") pod \"calico-typha-7c6f6c8587-jc7gj\" (UID: \"0862e935-3f08-4c78-9c4b-7ca1424a8182\") " pod="calico-system/calico-typha-7c6f6c8587-jc7gj" Jan 13 23:47:05.313968 kubelet[3522]: I0113 23:47:05.313333 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0862e935-3f08-4c78-9c4b-7ca1424a8182-tigera-ca-bundle\") pod \"calico-typha-7c6f6c8587-jc7gj\" (UID: \"0862e935-3f08-4c78-9c4b-7ca1424a8182\") " pod="calico-system/calico-typha-7c6f6c8587-jc7gj" Jan 13 23:47:05.453213 systemd[1]: Created slice kubepods-besteffort-pod195a8f42_dbfa_4e92_8f4c_d6680558b5ad.slice - libcontainer container kubepods-besteffort-pod195a8f42_dbfa_4e92_8f4c_d6680558b5ad.slice. 
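
The reconciler_common.go messages above (calico-typha) and below (calico-node) list every volume the kubelet attaches for the two newly created Calico pods: 3 volumes for calico-typha and 12 for calico-node in this excerpt. A small parsing sketch, assuming the journal excerpt has been saved one record per line to a hypothetical kubelet.log, tallies them per pod:

    # Count the volumes reported per pod in the kubelet
    # "VerifyControllerAttachedVolume started" messages.
    import re
    from collections import Counter

    pod_re = re.compile(r'pod="([^"]+)"')
    counts = Counter()
    with open("kubelet.log") as f:          # hypothetical file name
        for line in f:
            if "VerifyControllerAttachedVolume started" in line:
                m = pod_re.search(line)
                if m:
                    counts[m.group(1)] += 1
    # In this excerpt: calico-system/calico-typha-7c6f6c8587-jc7gj -> 3,
    #                  calico-system/calico-node-j5vz9 -> 12
    print(counts)
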
Jan 13 23:47:05.514300 kubelet[3522]: I0113 23:47:05.514258 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/195a8f42-dbfa-4e92-8f4c-d6680558b5ad-node-certs\") pod \"calico-node-j5vz9\" (UID: \"195a8f42-dbfa-4e92-8f4c-d6680558b5ad\") " pod="calico-system/calico-node-j5vz9" Jan 13 23:47:05.515032 kubelet[3522]: I0113 23:47:05.514650 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/195a8f42-dbfa-4e92-8f4c-d6680558b5ad-policysync\") pod \"calico-node-j5vz9\" (UID: \"195a8f42-dbfa-4e92-8f4c-d6680558b5ad\") " pod="calico-system/calico-node-j5vz9" Jan 13 23:47:05.515032 kubelet[3522]: I0113 23:47:05.514701 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/195a8f42-dbfa-4e92-8f4c-d6680558b5ad-var-run-calico\") pod \"calico-node-j5vz9\" (UID: \"195a8f42-dbfa-4e92-8f4c-d6680558b5ad\") " pod="calico-system/calico-node-j5vz9" Jan 13 23:47:05.515032 kubelet[3522]: I0113 23:47:05.514746 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/195a8f42-dbfa-4e92-8f4c-d6680558b5ad-var-lib-calico\") pod \"calico-node-j5vz9\" (UID: \"195a8f42-dbfa-4e92-8f4c-d6680558b5ad\") " pod="calico-system/calico-node-j5vz9" Jan 13 23:47:05.515032 kubelet[3522]: I0113 23:47:05.514799 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/195a8f42-dbfa-4e92-8f4c-d6680558b5ad-cni-log-dir\") pod \"calico-node-j5vz9\" (UID: \"195a8f42-dbfa-4e92-8f4c-d6680558b5ad\") " pod="calico-system/calico-node-j5vz9" Jan 13 23:47:05.515032 kubelet[3522]: I0113 23:47:05.514836 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/195a8f42-dbfa-4e92-8f4c-d6680558b5ad-lib-modules\") pod \"calico-node-j5vz9\" (UID: \"195a8f42-dbfa-4e92-8f4c-d6680558b5ad\") " pod="calico-system/calico-node-j5vz9" Jan 13 23:47:05.515328 kubelet[3522]: I0113 23:47:05.514887 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/195a8f42-dbfa-4e92-8f4c-d6680558b5ad-cni-bin-dir\") pod \"calico-node-j5vz9\" (UID: \"195a8f42-dbfa-4e92-8f4c-d6680558b5ad\") " pod="calico-system/calico-node-j5vz9" Jan 13 23:47:05.515328 kubelet[3522]: I0113 23:47:05.514923 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/195a8f42-dbfa-4e92-8f4c-d6680558b5ad-tigera-ca-bundle\") pod \"calico-node-j5vz9\" (UID: \"195a8f42-dbfa-4e92-8f4c-d6680558b5ad\") " pod="calico-system/calico-node-j5vz9" Jan 13 23:47:05.515328 kubelet[3522]: I0113 23:47:05.514961 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/195a8f42-dbfa-4e92-8f4c-d6680558b5ad-cni-net-dir\") pod \"calico-node-j5vz9\" (UID: \"195a8f42-dbfa-4e92-8f4c-d6680558b5ad\") " pod="calico-system/calico-node-j5vz9" Jan 13 23:47:05.515328 kubelet[3522]: I0113 23:47:05.515001 3522 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/195a8f42-dbfa-4e92-8f4c-d6680558b5ad-flexvol-driver-host\") pod \"calico-node-j5vz9\" (UID: \"195a8f42-dbfa-4e92-8f4c-d6680558b5ad\") " pod="calico-system/calico-node-j5vz9" Jan 13 23:47:05.515328 kubelet[3522]: I0113 23:47:05.515090 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/195a8f42-dbfa-4e92-8f4c-d6680558b5ad-xtables-lock\") pod \"calico-node-j5vz9\" (UID: \"195a8f42-dbfa-4e92-8f4c-d6680558b5ad\") " pod="calico-system/calico-node-j5vz9" Jan 13 23:47:05.515614 kubelet[3522]: I0113 23:47:05.515137 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6sll\" (UniqueName: \"kubernetes.io/projected/195a8f42-dbfa-4e92-8f4c-d6680558b5ad-kube-api-access-l6sll\") pod \"calico-node-j5vz9\" (UID: \"195a8f42-dbfa-4e92-8f4c-d6680558b5ad\") " pod="calico-system/calico-node-j5vz9" Jan 13 23:47:05.546879 containerd[1984]: time="2026-01-13T23:47:05.546750877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c6f6c8587-jc7gj,Uid:0862e935-3f08-4c78-9c4b-7ca1424a8182,Namespace:calico-system,Attempt:0,}" Jan 13 23:47:05.613387 kubelet[3522]: E0113 23:47:05.611968 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t67lh" podUID="62188975-46ce-424e-8434-9b05ea3b2915" Jan 13 23:47:05.623783 containerd[1984]: time="2026-01-13T23:47:05.623680334Z" level=info msg="connecting to shim 216a45dcfab4d0df18fa0a208b3c956821579c1aa34ff15295868f100756c6e9" address="unix:///run/containerd/s/ff3c8c64e7b7a3ae9f2592042106d490b1064708b0bde8dba4205280858a777f" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:47:05.631702 kubelet[3522]: E0113 23:47:05.630603 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.631702 kubelet[3522]: W0113 23:47:05.630681 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.631702 kubelet[3522]: E0113 23:47:05.630720 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.643559 kubelet[3522]: E0113 23:47:05.640777 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.643864 kubelet[3522]: W0113 23:47:05.643741 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.643864 kubelet[3522]: E0113 23:47:05.643793 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:47:05.682525 kubelet[3522]: E0113 23:47:05.682457 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.682684 kubelet[3522]: W0113 23:47:05.682494 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.682684 kubelet[3522]: E0113 23:47:05.682566 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.684534 kubelet[3522]: E0113 23:47:05.683444 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.684534 kubelet[3522]: W0113 23:47:05.683481 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.684534 kubelet[3522]: E0113 23:47:05.683851 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.684534 kubelet[3522]: E0113 23:47:05.684343 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.684534 kubelet[3522]: W0113 23:47:05.684359 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.684534 kubelet[3522]: E0113 23:47:05.684415 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.684941 kubelet[3522]: E0113 23:47:05.684880 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.684941 kubelet[3522]: W0113 23:47:05.684931 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.685049 kubelet[3522]: E0113 23:47:05.684952 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.687188 kubelet[3522]: E0113 23:47:05.685400 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.687188 kubelet[3522]: W0113 23:47:05.685429 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.687188 kubelet[3522]: E0113 23:47:05.685485 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:47:05.687188 kubelet[3522]: E0113 23:47:05.685918 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.687188 kubelet[3522]: W0113 23:47:05.685937 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.687188 kubelet[3522]: E0113 23:47:05.685958 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.687188 kubelet[3522]: E0113 23:47:05.686260 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.687188 kubelet[3522]: W0113 23:47:05.686276 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.687188 kubelet[3522]: E0113 23:47:05.686295 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.687188 kubelet[3522]: E0113 23:47:05.686582 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.687774 kubelet[3522]: W0113 23:47:05.686598 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.687774 kubelet[3522]: E0113 23:47:05.686616 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.687774 kubelet[3522]: E0113 23:47:05.686931 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.687774 kubelet[3522]: W0113 23:47:05.686946 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.687774 kubelet[3522]: E0113 23:47:05.686964 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.687774 kubelet[3522]: E0113 23:47:05.687184 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.687774 kubelet[3522]: W0113 23:47:05.687198 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.687774 kubelet[3522]: E0113 23:47:05.687216 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:47:05.687774 kubelet[3522]: E0113 23:47:05.687451 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.687774 kubelet[3522]: W0113 23:47:05.687493 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.688213 kubelet[3522]: E0113 23:47:05.687554 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.688213 kubelet[3522]: E0113 23:47:05.687792 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.688213 kubelet[3522]: W0113 23:47:05.687807 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.688213 kubelet[3522]: E0113 23:47:05.687825 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.688213 kubelet[3522]: E0113 23:47:05.688065 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.688213 kubelet[3522]: W0113 23:47:05.688079 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.688213 kubelet[3522]: E0113 23:47:05.688122 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.693018 kubelet[3522]: E0113 23:47:05.688383 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.693018 kubelet[3522]: W0113 23:47:05.688400 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.693018 kubelet[3522]: E0113 23:47:05.688419 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.693018 kubelet[3522]: E0113 23:47:05.689035 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.693018 kubelet[3522]: W0113 23:47:05.689057 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.693018 kubelet[3522]: E0113 23:47:05.689080 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:47:05.693018 kubelet[3522]: E0113 23:47:05.689358 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.693018 kubelet[3522]: W0113 23:47:05.689372 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.693018 kubelet[3522]: E0113 23:47:05.689390 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.693018 kubelet[3522]: E0113 23:47:05.690908 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.695029 kubelet[3522]: W0113 23:47:05.690935 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.695029 kubelet[3522]: E0113 23:47:05.690963 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.695029 kubelet[3522]: E0113 23:47:05.691236 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.695029 kubelet[3522]: W0113 23:47:05.691250 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.695029 kubelet[3522]: E0113 23:47:05.691268 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.695029 kubelet[3522]: E0113 23:47:05.691581 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.695029 kubelet[3522]: W0113 23:47:05.691597 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.695029 kubelet[3522]: E0113 23:47:05.691617 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.695029 kubelet[3522]: E0113 23:47:05.691884 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.695029 kubelet[3522]: W0113 23:47:05.691900 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.698709 kubelet[3522]: E0113 23:47:05.691919 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:47:05.698709 kubelet[3522]: E0113 23:47:05.692267 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.698709 kubelet[3522]: W0113 23:47:05.692286 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.698709 kubelet[3522]: E0113 23:47:05.692308 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.718727 kubelet[3522]: E0113 23:47:05.717859 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.718727 kubelet[3522]: W0113 23:47:05.718613 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.718727 kubelet[3522]: E0113 23:47:05.718660 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.721707 kubelet[3522]: I0113 23:47:05.720314 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbgsf\" (UniqueName: \"kubernetes.io/projected/62188975-46ce-424e-8434-9b05ea3b2915-kube-api-access-wbgsf\") pod \"csi-node-driver-t67lh\" (UID: \"62188975-46ce-424e-8434-9b05ea3b2915\") " pod="calico-system/csi-node-driver-t67lh" Jan 13 23:47:05.721707 kubelet[3522]: E0113 23:47:05.721476 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.722347 kubelet[3522]: W0113 23:47:05.722264 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.723114 kubelet[3522]: E0113 23:47:05.722694 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.726847 kubelet[3522]: E0113 23:47:05.726589 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.727889 kubelet[3522]: W0113 23:47:05.727053 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.727889 kubelet[3522]: E0113 23:47:05.727093 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:47:05.729973 kubelet[3522]: E0113 23:47:05.729701 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.729973 kubelet[3522]: W0113 23:47:05.729737 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.729973 kubelet[3522]: E0113 23:47:05.729768 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.730491 kubelet[3522]: I0113 23:47:05.729826 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/62188975-46ce-424e-8434-9b05ea3b2915-kubelet-dir\") pod \"csi-node-driver-t67lh\" (UID: \"62188975-46ce-424e-8434-9b05ea3b2915\") " pod="calico-system/csi-node-driver-t67lh" Jan 13 23:47:05.732795 kubelet[3522]: E0113 23:47:05.732739 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.733177 kubelet[3522]: W0113 23:47:05.732945 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.733177 kubelet[3522]: E0113 23:47:05.732994 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.736879 kubelet[3522]: E0113 23:47:05.736669 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.736879 kubelet[3522]: W0113 23:47:05.736709 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.736879 kubelet[3522]: E0113 23:47:05.736744 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.738335 kubelet[3522]: E0113 23:47:05.738202 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.738335 kubelet[3522]: W0113 23:47:05.738240 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.738335 kubelet[3522]: E0113 23:47:05.738272 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:47:05.738335 kubelet[3522]: I0113 23:47:05.738326 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/62188975-46ce-424e-8434-9b05ea3b2915-socket-dir\") pod \"csi-node-driver-t67lh\" (UID: \"62188975-46ce-424e-8434-9b05ea3b2915\") " pod="calico-system/csi-node-driver-t67lh" Jan 13 23:47:05.740251 kubelet[3522]: E0113 23:47:05.740203 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.740251 kubelet[3522]: W0113 23:47:05.740245 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.740687 kubelet[3522]: E0113 23:47:05.740279 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.740687 kubelet[3522]: I0113 23:47:05.740336 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/62188975-46ce-424e-8434-9b05ea3b2915-varrun\") pod \"csi-node-driver-t67lh\" (UID: \"62188975-46ce-424e-8434-9b05ea3b2915\") " pod="calico-system/csi-node-driver-t67lh" Jan 13 23:47:05.742858 kubelet[3522]: E0113 23:47:05.742558 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.742858 kubelet[3522]: W0113 23:47:05.742605 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.742858 kubelet[3522]: E0113 23:47:05.742640 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.744823 kubelet[3522]: E0113 23:47:05.744717 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.745096 kubelet[3522]: W0113 23:47:05.745066 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.745254 kubelet[3522]: E0113 23:47:05.745228 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.747826 kubelet[3522]: E0113 23:47:05.747792 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.748557 kubelet[3522]: W0113 23:47:05.747966 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.748557 kubelet[3522]: E0113 23:47:05.748027 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:47:05.748557 kubelet[3522]: I0113 23:47:05.748112 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/62188975-46ce-424e-8434-9b05ea3b2915-registration-dir\") pod \"csi-node-driver-t67lh\" (UID: \"62188975-46ce-424e-8434-9b05ea3b2915\") " pod="calico-system/csi-node-driver-t67lh" Jan 13 23:47:05.750345 kubelet[3522]: E0113 23:47:05.750164 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.750345 kubelet[3522]: W0113 23:47:05.750203 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.750345 kubelet[3522]: E0113 23:47:05.750236 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.756645 kubelet[3522]: E0113 23:47:05.756604 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.757081 kubelet[3522]: W0113 23:47:05.756825 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.757081 kubelet[3522]: E0113 23:47:05.756867 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.759775 kubelet[3522]: E0113 23:47:05.759740 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.760298 kubelet[3522]: W0113 23:47:05.759925 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.760494 kubelet[3522]: E0113 23:47:05.760456 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.763772 kubelet[3522]: E0113 23:47:05.763734 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.764294 kubelet[3522]: W0113 23:47:05.763966 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.764294 kubelet[3522]: E0113 23:47:05.764008 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.765033 systemd[1]: Started cri-containerd-216a45dcfab4d0df18fa0a208b3c956821579c1aa34ff15295868f100756c6e9.scope - libcontainer container 216a45dcfab4d0df18fa0a208b3c956821579c1aa34ff15295868f100756c6e9. 
Jan 13 23:47:05.767802 containerd[1984]: time="2026-01-13T23:47:05.767408859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j5vz9,Uid:195a8f42-dbfa-4e92-8f4c-d6680558b5ad,Namespace:calico-system,Attempt:0,}" Jan 13 23:47:05.835991 containerd[1984]: time="2026-01-13T23:47:05.835873467Z" level=info msg="connecting to shim f990c9d4e1e2d1dd1c181f79a1fb5b67516c23e19625cc24f3dafaf5860b4c23" address="unix:///run/containerd/s/77576e513c925ff9c2cba6c4aaccdba1eaebf9eb2accc08ac977b1238cb6f237" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:47:05.855395 kubelet[3522]: E0113 23:47:05.855070 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.855395 kubelet[3522]: W0113 23:47:05.855237 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.855395 kubelet[3522]: E0113 23:47:05.855397 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:47:05.879364 kubelet[3522]: E0113 23:47:05.877968 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.879813 kubelet[3522]: W0113 23:47:05.879694 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.879813 kubelet[3522]: E0113 23:47:05.879751 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.890000 audit: BPF prog-id=158 op=LOAD Jan 13 23:47:05.893635 kernel: kauditd_printk_skb: 26 callbacks suppressed Jan 13 23:47:05.893771 kernel: audit: type=1334 audit(1768348025.890:558): prog-id=158 op=LOAD Jan 13 23:47:05.895000 audit: BPF prog-id=159 op=LOAD Jan 13 23:47:05.898895 kernel: audit: type=1334 audit(1768348025.895:559): prog-id=159 op=LOAD Jan 13 23:47:05.895000 audit[3954]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=3942 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:05.907684 kernel: audit: type=1300 audit(1768348025.895:559): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=3942 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:05.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231366134356463666162346430646631386661306132303862336339 Jan 13 23:47:05.919698 kernel: audit: type=1327 audit(1768348025.895:559): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231366134356463666162346430646631386661306132303862336339 Jan 13 23:47:05.901000 audit: BPF prog-id=159 op=UNLOAD Jan 13 23:47:05.926461 kernel: audit: type=1334 audit(1768348025.901:560): prog-id=159 op=UNLOAD Jan 13 23:47:05.927755 kernel: audit: type=1300 audit(1768348025.901:560): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3942 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:05.901000 audit[3954]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3942 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:05.935219 kubelet[3522]: E0113 23:47:05.935165 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:05.901000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231366134356463666162346430646631386661306132303862336339 Jan 13 23:47:05.935648 kubelet[3522]: W0113 23:47:05.935618 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:05.935851 kubelet[3522]: E0113 23:47:05.935826 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:05.941626 kernel: audit: type=1327 audit(1768348025.901:560): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231366134356463666162346430646631386661306132303862336339 Jan 13 23:47:05.902000 audit: BPF prog-id=160 op=LOAD Jan 13 23:47:05.947748 kernel: audit: type=1334 audit(1768348025.902:561): prog-id=160 op=LOAD Jan 13 23:47:05.902000 audit[3954]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=3942 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:05.955660 kernel: audit: type=1300 audit(1768348025.902:561): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=3942 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:05.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231366134356463666162346430646631386661306132303862336339 Jan 13 23:47:05.962629 kernel: audit: type=1327 audit(1768348025.902:561): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231366134356463666162346430646631386661306132303862336339 Jan 13 23:47:05.922000 audit: BPF prog-id=161 op=LOAD Jan 13 23:47:05.922000 audit[3954]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=3942 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:05.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231366134356463666162346430646631386661306132303862336339 Jan 13 23:47:05.922000 audit: BPF prog-id=161 op=UNLOAD Jan 13 23:47:05.922000 audit[3954]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3942 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:05.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231366134356463666162346430646631386661306132303862336339 Jan 13 23:47:05.922000 audit: BPF prog-id=160 op=UNLOAD Jan 13 23:47:05.922000 audit[3954]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3942 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:05.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231366134356463666162346430646631386661306132303862336339 Jan 13 23:47:05.922000 audit: BPF prog-id=162 op=LOAD Jan 13 23:47:05.922000 audit[3954]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=3942 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:05.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231366134356463666162346430646631386661306132303862336339 Jan 13 23:47:05.977854 systemd[1]: Started cri-containerd-f990c9d4e1e2d1dd1c181f79a1fb5b67516c23e19625cc24f3dafaf5860b4c23.scope - libcontainer container f990c9d4e1e2d1dd1c181f79a1fb5b67516c23e19625cc24f3dafaf5860b4c23. 
Jan 13 23:47:06.028005 containerd[1984]: time="2026-01-13T23:47:06.027922272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c6f6c8587-jc7gj,Uid:0862e935-3f08-4c78-9c4b-7ca1424a8182,Namespace:calico-system,Attempt:0,} returns sandbox id \"216a45dcfab4d0df18fa0a208b3c956821579c1aa34ff15295868f100756c6e9\"" Jan 13 23:47:06.029000 audit: BPF prog-id=163 op=LOAD Jan 13 23:47:06.032155 containerd[1984]: time="2026-01-13T23:47:06.031227912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 13 23:47:06.031000 audit: BPF prog-id=164 op=LOAD Jan 13 23:47:06.031000 audit[4055]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4025 pid=4055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:06.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639393063396434653165326431646431633138316637396131666235 Jan 13 23:47:06.031000 audit: BPF prog-id=164 op=UNLOAD Jan 13 23:47:06.031000 audit[4055]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4025 pid=4055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:06.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639393063396434653165326431646431633138316637396131666235 Jan 13 23:47:06.031000 audit: BPF prog-id=165 op=LOAD Jan 13 23:47:06.031000 audit[4055]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4025 pid=4055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:06.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639393063396434653165326431646431633138316637396131666235 Jan 13 23:47:06.032000 audit: BPF prog-id=166 op=LOAD Jan 13 23:47:06.032000 audit[4055]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4025 pid=4055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:06.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639393063396434653165326431646431633138316637396131666235 Jan 13 23:47:06.032000 audit: BPF prog-id=166 op=UNLOAD Jan 13 23:47:06.032000 audit[4055]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4025 pid=4055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:06.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639393063396434653165326431646431633138316637396131666235 Jan 13 23:47:06.032000 audit: BPF prog-id=165 op=UNLOAD Jan 13 23:47:06.032000 audit[4055]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4025 pid=4055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:06.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639393063396434653165326431646431633138316637396131666235 Jan 13 23:47:06.032000 audit: BPF prog-id=167 op=LOAD Jan 13 23:47:06.032000 audit[4055]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4025 pid=4055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:06.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639393063396434653165326431646431633138316637396131666235 Jan 13 23:47:06.075315 containerd[1984]: time="2026-01-13T23:47:06.075257700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j5vz9,Uid:195a8f42-dbfa-4e92-8f4c-d6680558b5ad,Namespace:calico-system,Attempt:0,} returns sandbox id \"f990c9d4e1e2d1dd1c181f79a1fb5b67516c23e19625cc24f3dafaf5860b4c23\"" Jan 13 23:47:06.197000 audit[4098]: NETFILTER_CFG table=filter:121 family=2 entries=22 op=nft_register_rule pid=4098 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:06.197000 audit[4098]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff11b96e0 a2=0 a3=1 items=0 ppid=3635 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:06.197000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:06.202000 audit[4098]: NETFILTER_CFG table=nat:122 family=2 entries=12 op=nft_register_rule pid=4098 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:06.202000 audit[4098]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff11b96e0 a2=0 a3=1 items=0 ppid=3635 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:06.202000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:06.895570 kubelet[3522]: E0113 23:47:06.895380 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="network 
is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t67lh" podUID="62188975-46ce-424e-8434-9b05ea3b2915" Jan 13 23:47:07.378128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1796378024.mount: Deactivated successfully. Jan 13 23:47:08.498419 containerd[1984]: time="2026-01-13T23:47:08.498341116Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:47:08.501686 containerd[1984]: time="2026-01-13T23:47:08.501597832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Jan 13 23:47:08.503160 containerd[1984]: time="2026-01-13T23:47:08.503099872Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:47:08.508219 containerd[1984]: time="2026-01-13T23:47:08.508150168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:47:08.510699 containerd[1984]: time="2026-01-13T23:47:08.510486712Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.479208652s" Jan 13 23:47:08.510699 containerd[1984]: time="2026-01-13T23:47:08.510562840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 13 23:47:08.512525 containerd[1984]: time="2026-01-13T23:47:08.512228896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 13 23:47:08.539977 containerd[1984]: time="2026-01-13T23:47:08.539919664Z" level=info msg="CreateContainer within sandbox \"216a45dcfab4d0df18fa0a208b3c956821579c1aa34ff15295868f100756c6e9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 13 23:47:08.555534 containerd[1984]: time="2026-01-13T23:47:08.552112504Z" level=info msg="Container 788b566b96cd9d3eb1c07cd55438081e917c6fc7bb7ffbb7c85f75e4ef933244: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:47:08.572615 containerd[1984]: time="2026-01-13T23:47:08.572327441Z" level=info msg="CreateContainer within sandbox \"216a45dcfab4d0df18fa0a208b3c956821579c1aa34ff15295868f100756c6e9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"788b566b96cd9d3eb1c07cd55438081e917c6fc7bb7ffbb7c85f75e4ef933244\"" Jan 13 23:47:08.575548 containerd[1984]: time="2026-01-13T23:47:08.574998953Z" level=info msg="StartContainer for \"788b566b96cd9d3eb1c07cd55438081e917c6fc7bb7ffbb7c85f75e4ef933244\"" Jan 13 23:47:08.583635 containerd[1984]: time="2026-01-13T23:47:08.583565993Z" level=info msg="connecting to shim 788b566b96cd9d3eb1c07cd55438081e917c6fc7bb7ffbb7c85f75e4ef933244" address="unix:///run/containerd/s/ff3c8c64e7b7a3ae9f2592042106d490b1064708b0bde8dba4205280858a777f" protocol=ttrpc version=3 Jan 13 23:47:08.627843 systemd[1]: Started 
cri-containerd-788b566b96cd9d3eb1c07cd55438081e917c6fc7bb7ffbb7c85f75e4ef933244.scope - libcontainer container 788b566b96cd9d3eb1c07cd55438081e917c6fc7bb7ffbb7c85f75e4ef933244. Jan 13 23:47:08.653000 audit: BPF prog-id=168 op=LOAD Jan 13 23:47:08.654000 audit: BPF prog-id=169 op=LOAD Jan 13 23:47:08.654000 audit[4112]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3942 pid=4112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:08.654000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738386235363662393663643964336562316330376364353534333830 Jan 13 23:47:08.654000 audit: BPF prog-id=169 op=UNLOAD Jan 13 23:47:08.654000 audit[4112]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3942 pid=4112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:08.654000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738386235363662393663643964336562316330376364353534333830 Jan 13 23:47:08.655000 audit: BPF prog-id=170 op=LOAD Jan 13 23:47:08.655000 audit[4112]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3942 pid=4112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:08.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738386235363662393663643964336562316330376364353534333830 Jan 13 23:47:08.655000 audit: BPF prog-id=171 op=LOAD Jan 13 23:47:08.655000 audit[4112]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3942 pid=4112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:08.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738386235363662393663643964336562316330376364353534333830 Jan 13 23:47:08.655000 audit: BPF prog-id=171 op=UNLOAD Jan 13 23:47:08.655000 audit[4112]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3942 pid=4112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:08.655000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738386235363662393663643964336562316330376364353534333830 Jan 13 23:47:08.655000 audit: BPF prog-id=170 op=UNLOAD Jan 13 23:47:08.655000 audit[4112]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3942 pid=4112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:08.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738386235363662393663643964336562316330376364353534333830 Jan 13 23:47:08.655000 audit: BPF prog-id=172 op=LOAD Jan 13 23:47:08.655000 audit[4112]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3942 pid=4112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:08.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738386235363662393663643964336562316330376364353534333830 Jan 13 23:47:08.718732 containerd[1984]: time="2026-01-13T23:47:08.718665113Z" level=info msg="StartContainer for \"788b566b96cd9d3eb1c07cd55438081e917c6fc7bb7ffbb7c85f75e4ef933244\" returns successfully" Jan 13 23:47:08.897538 kubelet[3522]: E0113 23:47:08.896119 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t67lh" podUID="62188975-46ce-424e-8434-9b05ea3b2915" Jan 13 23:47:09.113498 kubelet[3522]: E0113 23:47:09.113367 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.113498 kubelet[3522]: W0113 23:47:09.113425 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.113498 kubelet[3522]: E0113 23:47:09.113459 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.114522 kubelet[3522]: E0113 23:47:09.114395 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.115643 kubelet[3522]: W0113 23:47:09.114448 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.115981 kubelet[3522]: E0113 23:47:09.115825 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:47:09.116524 kubelet[3522]: E0113 23:47:09.116452 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.116821 kubelet[3522]: W0113 23:47:09.116496 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.116821 kubelet[3522]: E0113 23:47:09.116750 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.117529 kubelet[3522]: E0113 23:47:09.117450 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.117774 kubelet[3522]: W0113 23:47:09.117480 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.117774 kubelet[3522]: E0113 23:47:09.117718 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.118387 kubelet[3522]: E0113 23:47:09.118326 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.118640 kubelet[3522]: W0113 23:47:09.118355 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.118640 kubelet[3522]: E0113 23:47:09.118585 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.119330 kubelet[3522]: E0113 23:47:09.119211 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.119330 kubelet[3522]: W0113 23:47:09.119263 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.119330 kubelet[3522]: E0113 23:47:09.119292 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.120184 kubelet[3522]: E0113 23:47:09.119981 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.120184 kubelet[3522]: W0113 23:47:09.120007 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.120184 kubelet[3522]: E0113 23:47:09.120032 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:47:09.120710 kubelet[3522]: E0113 23:47:09.120629 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.120710 kubelet[3522]: W0113 23:47:09.120654 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.120710 kubelet[3522]: E0113 23:47:09.120679 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.121440 kubelet[3522]: E0113 23:47:09.121394 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.121716 kubelet[3522]: W0113 23:47:09.121606 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.121716 kubelet[3522]: E0113 23:47:09.121644 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.122357 kubelet[3522]: E0113 23:47:09.122166 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.122357 kubelet[3522]: W0113 23:47:09.122190 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.122357 kubelet[3522]: E0113 23:47:09.122214 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.123026 kubelet[3522]: E0113 23:47:09.122817 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.123026 kubelet[3522]: W0113 23:47:09.122846 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.123026 kubelet[3522]: E0113 23:47:09.122872 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.123413 kubelet[3522]: E0113 23:47:09.123390 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.123828 kubelet[3522]: W0113 23:47:09.123617 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.123828 kubelet[3522]: E0113 23:47:09.123651 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:47:09.124196 kubelet[3522]: E0113 23:47:09.124172 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.124473 kubelet[3522]: W0113 23:47:09.124287 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.124473 kubelet[3522]: E0113 23:47:09.124321 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.124842 kubelet[3522]: E0113 23:47:09.124818 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.125126 kubelet[3522]: W0113 23:47:09.124932 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.125126 kubelet[3522]: E0113 23:47:09.124961 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.125572 kubelet[3522]: E0113 23:47:09.125465 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.125792 kubelet[3522]: W0113 23:47:09.125491 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.125792 kubelet[3522]: E0113 23:47:09.125688 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.149038 kubelet[3522]: I0113 23:47:09.148658 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c6f6c8587-jc7gj" podStartSLOduration=1.667792227 podStartE2EDuration="4.148635951s" podCreationTimestamp="2026-01-13 23:47:05 +0000 UTC" firstStartedPulling="2026-01-13 23:47:06.030849672 +0000 UTC m=+32.438558646" lastFinishedPulling="2026-01-13 23:47:08.511693396 +0000 UTC m=+34.919402370" observedRunningTime="2026-01-13 23:47:09.148586763 +0000 UTC m=+35.556295749" watchObservedRunningTime="2026-01-13 23:47:09.148635951 +0000 UTC m=+35.556345021" Jan 13 23:47:09.195936 kubelet[3522]: E0113 23:47:09.195862 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.195936 kubelet[3522]: W0113 23:47:09.195989 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.195936 kubelet[3522]: E0113 23:47:09.196025 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:47:09.197469 kubelet[3522]: E0113 23:47:09.197377 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.197469 kubelet[3522]: W0113 23:47:09.197410 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.197469 kubelet[3522]: E0113 23:47:09.197439 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.199251 kubelet[3522]: E0113 23:47:09.199169 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.199845 kubelet[3522]: W0113 23:47:09.199205 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.199845 kubelet[3522]: E0113 23:47:09.199630 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.201039 kubelet[3522]: E0113 23:47:09.200965 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.201039 kubelet[3522]: W0113 23:47:09.201002 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.201591 kubelet[3522]: E0113 23:47:09.201437 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.203012 kubelet[3522]: E0113 23:47:09.202865 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.203527 kubelet[3522]: W0113 23:47:09.203288 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.203527 kubelet[3522]: E0113 23:47:09.203341 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.204901 kubelet[3522]: E0113 23:47:09.204701 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.205185 kubelet[3522]: W0113 23:47:09.204737 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.205185 kubelet[3522]: E0113 23:47:09.205076 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:47:09.205904 kubelet[3522]: E0113 23:47:09.205807 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.205904 kubelet[3522]: W0113 23:47:09.205860 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.205904 kubelet[3522]: E0113 23:47:09.205888 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.206766 kubelet[3522]: E0113 23:47:09.206352 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.206766 kubelet[3522]: W0113 23:47:09.206386 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.206766 kubelet[3522]: E0113 23:47:09.206444 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.206996 kubelet[3522]: E0113 23:47:09.206890 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.206996 kubelet[3522]: W0113 23:47:09.206910 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.206996 kubelet[3522]: E0113 23:47:09.206936 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.208124 kubelet[3522]: E0113 23:47:09.208002 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.208124 kubelet[3522]: W0113 23:47:09.208039 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.208124 kubelet[3522]: E0113 23:47:09.208071 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.209148 kubelet[3522]: E0113 23:47:09.208632 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.209148 kubelet[3522]: W0113 23:47:09.208668 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.209148 kubelet[3522]: E0113 23:47:09.208700 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:47:09.210335 kubelet[3522]: E0113 23:47:09.209914 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.210627 kubelet[3522]: W0113 23:47:09.210486 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.210627 kubelet[3522]: E0113 23:47:09.210596 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.211269 kubelet[3522]: E0113 23:47:09.211195 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.211269 kubelet[3522]: W0113 23:47:09.211220 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.211269 kubelet[3522]: E0113 23:47:09.211243 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.212244 kubelet[3522]: E0113 23:47:09.211955 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.212244 kubelet[3522]: W0113 23:47:09.211979 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.212244 kubelet[3522]: E0113 23:47:09.212002 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.212967 kubelet[3522]: E0113 23:47:09.212764 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.212967 kubelet[3522]: W0113 23:47:09.212789 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.212967 kubelet[3522]: E0113 23:47:09.212813 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.213536 kubelet[3522]: E0113 23:47:09.213426 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.213938 kubelet[3522]: W0113 23:47:09.213627 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.213938 kubelet[3522]: E0113 23:47:09.213661 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:47:09.214265 kubelet[3522]: E0113 23:47:09.214243 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.214520 kubelet[3522]: W0113 23:47:09.214360 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.214640 kubelet[3522]: E0113 23:47:09.214617 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.215575 kubelet[3522]: E0113 23:47:09.215473 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:47:09.215759 kubelet[3522]: W0113 23:47:09.215676 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:47:09.215759 kubelet[3522]: E0113 23:47:09.215710 3522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:47:09.808481 containerd[1984]: time="2026-01-13T23:47:09.808392631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:47:09.810009 containerd[1984]: time="2026-01-13T23:47:09.809917387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4262566" Jan 13 23:47:09.811900 containerd[1984]: time="2026-01-13T23:47:09.811780279Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:47:09.815902 containerd[1984]: time="2026-01-13T23:47:09.815822407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:47:09.817872 containerd[1984]: time="2026-01-13T23:47:09.817141711Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.304810671s" Jan 13 23:47:09.817872 containerd[1984]: time="2026-01-13T23:47:09.817224955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 13 23:47:09.825970 containerd[1984]: time="2026-01-13T23:47:09.825908947Z" level=info msg="CreateContainer within sandbox \"f990c9d4e1e2d1dd1c181f79a1fb5b67516c23e19625cc24f3dafaf5860b4c23\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 23:47:09.841536 containerd[1984]: time="2026-01-13T23:47:09.841057027Z" level=info msg="Container 9a4d53ce29b6e819649bd7ee37bf5186b97e8345ee08093114a775a272e8549a: 
CDI devices from CRI Config.CDIDevices: []" Jan 13 23:47:09.852672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3510076951.mount: Deactivated successfully. Jan 13 23:47:09.864808 containerd[1984]: time="2026-01-13T23:47:09.864235543Z" level=info msg="CreateContainer within sandbox \"f990c9d4e1e2d1dd1c181f79a1fb5b67516c23e19625cc24f3dafaf5860b4c23\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9a4d53ce29b6e819649bd7ee37bf5186b97e8345ee08093114a775a272e8549a\"" Jan 13 23:47:09.868386 containerd[1984]: time="2026-01-13T23:47:09.868324627Z" level=info msg="StartContainer for \"9a4d53ce29b6e819649bd7ee37bf5186b97e8345ee08093114a775a272e8549a\"" Jan 13 23:47:09.872731 containerd[1984]: time="2026-01-13T23:47:09.872664943Z" level=info msg="connecting to shim 9a4d53ce29b6e819649bd7ee37bf5186b97e8345ee08093114a775a272e8549a" address="unix:///run/containerd/s/77576e513c925ff9c2cba6c4aaccdba1eaebf9eb2accc08ac977b1238cb6f237" protocol=ttrpc version=3 Jan 13 23:47:09.914877 systemd[1]: Started cri-containerd-9a4d53ce29b6e819649bd7ee37bf5186b97e8345ee08093114a775a272e8549a.scope - libcontainer container 9a4d53ce29b6e819649bd7ee37bf5186b97e8345ee08093114a775a272e8549a. Jan 13 23:47:10.007000 audit: BPF prog-id=173 op=LOAD Jan 13 23:47:10.007000 audit[4187]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4025 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:10.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346435336365323962366538313936343962643765653337626635 Jan 13 23:47:10.007000 audit: BPF prog-id=174 op=LOAD Jan 13 23:47:10.007000 audit[4187]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4025 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:10.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346435336365323962366538313936343962643765653337626635 Jan 13 23:47:10.007000 audit: BPF prog-id=174 op=UNLOAD Jan 13 23:47:10.007000 audit[4187]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4025 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:10.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346435336365323962366538313936343962643765653337626635 Jan 13 23:47:10.007000 audit: BPF prog-id=173 op=UNLOAD Jan 13 23:47:10.007000 audit[4187]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4025 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:10.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346435336365323962366538313936343962643765653337626635 Jan 13 23:47:10.007000 audit: BPF prog-id=175 op=LOAD Jan 13 23:47:10.007000 audit[4187]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4025 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:10.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346435336365323962366538313936343962643765653337626635 Jan 13 23:47:10.050922 containerd[1984]: time="2026-01-13T23:47:10.050849704Z" level=info msg="StartContainer for \"9a4d53ce29b6e819649bd7ee37bf5186b97e8345ee08093114a775a272e8549a\" returns successfully" Jan 13 23:47:10.083693 systemd[1]: cri-containerd-9a4d53ce29b6e819649bd7ee37bf5186b97e8345ee08093114a775a272e8549a.scope: Deactivated successfully. Jan 13 23:47:10.087000 audit: BPF prog-id=175 op=UNLOAD Jan 13 23:47:10.096883 containerd[1984]: time="2026-01-13T23:47:10.096801940Z" level=info msg="received container exit event container_id:\"9a4d53ce29b6e819649bd7ee37bf5186b97e8345ee08093114a775a272e8549a\" id:\"9a4d53ce29b6e819649bd7ee37bf5186b97e8345ee08093114a775a272e8549a\" pid:4200 exited_at:{seconds:1768348030 nanos:95935876}" Jan 13 23:47:10.189391 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a4d53ce29b6e819649bd7ee37bf5186b97e8345ee08093114a775a272e8549a-rootfs.mount: Deactivated successfully. 
Jan 13 23:47:10.241000 audit[4239]: NETFILTER_CFG table=filter:123 family=2 entries=21 op=nft_register_rule pid=4239 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:10.241000 audit[4239]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffdb586420 a2=0 a3=1 items=0 ppid=3635 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:10.241000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:10.247000 audit[4239]: NETFILTER_CFG table=nat:124 family=2 entries=19 op=nft_register_chain pid=4239 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:10.247000 audit[4239]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffdb586420 a2=0 a3=1 items=0 ppid=3635 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:10.247000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:10.896571 kubelet[3522]: E0113 23:47:10.896202 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t67lh" podUID="62188975-46ce-424e-8434-9b05ea3b2915" Jan 13 23:47:11.133992 containerd[1984]: time="2026-01-13T23:47:11.129670361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 13 23:47:12.895580 kubelet[3522]: E0113 23:47:12.895458 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t67lh" podUID="62188975-46ce-424e-8434-9b05ea3b2915" Jan 13 23:47:14.669851 containerd[1984]: time="2026-01-13T23:47:14.669785003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:47:14.672645 containerd[1984]: time="2026-01-13T23:47:14.671707523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Jan 13 23:47:14.674258 containerd[1984]: time="2026-01-13T23:47:14.674195027Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:47:14.681161 containerd[1984]: time="2026-01-13T23:47:14.681097187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:47:14.683292 containerd[1984]: time="2026-01-13T23:47:14.682498247Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.548896158s" Jan 13 23:47:14.683292 containerd[1984]: time="2026-01-13T23:47:14.682570679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 13 23:47:14.692039 containerd[1984]: time="2026-01-13T23:47:14.691990823Z" level=info msg="CreateContainer within sandbox \"f990c9d4e1e2d1dd1c181f79a1fb5b67516c23e19625cc24f3dafaf5860b4c23\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 23:47:14.712534 containerd[1984]: time="2026-01-13T23:47:14.711225935Z" level=info msg="Container 6079d38abc58bc6130dce7e00dd4d4522ed3ca0b550393fac9c47c66be3bddc4: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:47:14.737982 containerd[1984]: time="2026-01-13T23:47:14.737905439Z" level=info msg="CreateContainer within sandbox \"f990c9d4e1e2d1dd1c181f79a1fb5b67516c23e19625cc24f3dafaf5860b4c23\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6079d38abc58bc6130dce7e00dd4d4522ed3ca0b550393fac9c47c66be3bddc4\"" Jan 13 23:47:14.739851 containerd[1984]: time="2026-01-13T23:47:14.739785479Z" level=info msg="StartContainer for \"6079d38abc58bc6130dce7e00dd4d4522ed3ca0b550393fac9c47c66be3bddc4\"" Jan 13 23:47:14.744908 containerd[1984]: time="2026-01-13T23:47:14.744846107Z" level=info msg="connecting to shim 6079d38abc58bc6130dce7e00dd4d4522ed3ca0b550393fac9c47c66be3bddc4" address="unix:///run/containerd/s/77576e513c925ff9c2cba6c4aaccdba1eaebf9eb2accc08ac977b1238cb6f237" protocol=ttrpc version=3 Jan 13 23:47:14.794962 systemd[1]: Started cri-containerd-6079d38abc58bc6130dce7e00dd4d4522ed3ca0b550393fac9c47c66be3bddc4.scope - libcontainer container 6079d38abc58bc6130dce7e00dd4d4522ed3ca0b550393fac9c47c66be3bddc4. 
Jan 13 23:47:14.864000 audit: BPF prog-id=176 op=LOAD Jan 13 23:47:14.868117 kernel: kauditd_printk_skb: 84 callbacks suppressed Jan 13 23:47:14.868226 kernel: audit: type=1334 audit(1768348034.864:592): prog-id=176 op=LOAD Jan 13 23:47:14.864000 audit[4249]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4025 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:14.883091 kernel: audit: type=1300 audit(1768348034.864:592): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4025 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:14.883223 kernel: audit: type=1327 audit(1768348034.864:592): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630373964333861626335386263363133306463653765303064643464 Jan 13 23:47:14.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630373964333861626335386263363133306463653765303064643464 Jan 13 23:47:14.864000 audit: BPF prog-id=177 op=LOAD Jan 13 23:47:14.885735 kernel: audit: type=1334 audit(1768348034.864:593): prog-id=177 op=LOAD Jan 13 23:47:14.864000 audit[4249]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4025 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:14.892241 kernel: audit: type=1300 audit(1768348034.864:593): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4025 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:14.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630373964333861626335386263363133306463653765303064643464 Jan 13 23:47:14.898418 kernel: audit: type=1327 audit(1768348034.864:593): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630373964333861626335386263363133306463653765303064643464 Jan 13 23:47:14.902146 kubelet[3522]: E0113 23:47:14.899758 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t67lh" podUID="62188975-46ce-424e-8434-9b05ea3b2915" Jan 13 23:47:14.905879 kernel: audit: type=1334 audit(1768348034.867:594): prog-id=177 op=UNLOAD Jan 13 23:47:14.906637 kernel: audit: type=1300 
audit(1768348034.867:594): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4025 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:14.867000 audit: BPF prog-id=177 op=UNLOAD Jan 13 23:47:14.867000 audit[4249]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4025 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:14.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630373964333861626335386263363133306463653765303064643464 Jan 13 23:47:14.920029 kernel: audit: type=1327 audit(1768348034.867:594): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630373964333861626335386263363133306463653765303064643464 Jan 13 23:47:14.867000 audit: BPF prog-id=176 op=UNLOAD Jan 13 23:47:14.923198 kernel: audit: type=1334 audit(1768348034.867:595): prog-id=176 op=UNLOAD Jan 13 23:47:14.867000 audit[4249]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4025 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:14.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630373964333861626335386263363133306463653765303064643464 Jan 13 23:47:14.867000 audit: BPF prog-id=178 op=LOAD Jan 13 23:47:14.867000 audit[4249]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4025 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:14.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630373964333861626335386263363133306463653765303064643464 Jan 13 23:47:14.947929 containerd[1984]: time="2026-01-13T23:47:14.947787000Z" level=info msg="StartContainer for \"6079d38abc58bc6130dce7e00dd4d4522ed3ca0b550393fac9c47c66be3bddc4\" returns successfully" Jan 13 23:47:15.934937 containerd[1984]: time="2026-01-13T23:47:15.934751773Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 23:47:15.943433 systemd[1]: cri-containerd-6079d38abc58bc6130dce7e00dd4d4522ed3ca0b550393fac9c47c66be3bddc4.scope: Deactivated successfully. 
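The reload error above fires because the install-cni container has so far only written /etc/cni/net.d/calico-kubeconfig; until a network config file appears in that directory, containerd keeps reporting the CNI plugin as not initialized and the csi-node-driver pod keeps failing with NetworkReady=false. A rough Go stand-in for that directory check, assuming the commonly used CNI config extensions (.conf, .conflist, .json); this is an illustration, not containerd's loader:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Scans /etc/cni/net.d the way a CNI config loader conceptually does:
// only files with config extensions count, so a freshly written
// calico-kubeconfig alone still leaves the plugin "not initialized".
func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cni config load failed: %v\n", err)
		return
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Printf("candidate network config: %s\n", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Printf("no network config found in %s\n", dir)
	}
}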
Jan 13 23:47:15.944258 systemd[1]: cri-containerd-6079d38abc58bc6130dce7e00dd4d4522ed3ca0b550393fac9c47c66be3bddc4.scope: Consumed 956ms CPU time, 187.2M memory peak, 165.9M written to disk. Jan 13 23:47:15.947000 audit: BPF prog-id=178 op=UNLOAD Jan 13 23:47:15.952586 containerd[1984]: time="2026-01-13T23:47:15.952525705Z" level=info msg="received container exit event container_id:\"6079d38abc58bc6130dce7e00dd4d4522ed3ca0b550393fac9c47c66be3bddc4\" id:\"6079d38abc58bc6130dce7e00dd4d4522ed3ca0b550393fac9c47c66be3bddc4\" pid:4263 exited_at:{seconds:1768348035 nanos:951964309}" Jan 13 23:47:15.993888 kubelet[3522]: I0113 23:47:15.993627 3522 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 13 23:47:16.001267 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6079d38abc58bc6130dce7e00dd4d4522ed3ca0b550393fac9c47c66be3bddc4-rootfs.mount: Deactivated successfully. Jan 13 23:47:16.131027 systemd[1]: Created slice kubepods-burstable-podcc252f99_0335_4f8d_b959_3bfc25386e2c.slice - libcontainer container kubepods-burstable-podcc252f99_0335_4f8d_b959_3bfc25386e2c.slice. Jan 13 23:47:16.245004 systemd[1]: Created slice kubepods-burstable-podfff694d2_51f0_4513_a81f_bedcae1438b9.slice - libcontainer container kubepods-burstable-podfff694d2_51f0_4513_a81f_bedcae1438b9.slice. Jan 13 23:47:16.271946 kubelet[3522]: I0113 23:47:16.266415 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fff694d2-51f0-4513-a81f-bedcae1438b9-config-volume\") pod \"coredns-674b8bbfcf-6f6jh\" (UID: \"fff694d2-51f0-4513-a81f-bedcae1438b9\") " pod="kube-system/coredns-674b8bbfcf-6f6jh" Jan 13 23:47:16.271946 kubelet[3522]: I0113 23:47:16.266590 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9zr9\" (UniqueName: \"kubernetes.io/projected/cc252f99-0335-4f8d-b959-3bfc25386e2c-kube-api-access-x9zr9\") pod \"coredns-674b8bbfcf-tfxlt\" (UID: \"cc252f99-0335-4f8d-b959-3bfc25386e2c\") " pod="kube-system/coredns-674b8bbfcf-tfxlt" Jan 13 23:47:16.271946 kubelet[3522]: I0113 23:47:16.266731 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7szqk\" (UniqueName: \"kubernetes.io/projected/fff694d2-51f0-4513-a81f-bedcae1438b9-kube-api-access-7szqk\") pod \"coredns-674b8bbfcf-6f6jh\" (UID: \"fff694d2-51f0-4513-a81f-bedcae1438b9\") " pod="kube-system/coredns-674b8bbfcf-6f6jh" Jan 13 23:47:16.271946 kubelet[3522]: I0113 23:47:16.267256 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc252f99-0335-4f8d-b959-3bfc25386e2c-config-volume\") pod \"coredns-674b8bbfcf-tfxlt\" (UID: \"cc252f99-0335-4f8d-b959-3bfc25386e2c\") " pod="kube-system/coredns-674b8bbfcf-tfxlt" Jan 13 23:47:16.310856 systemd[1]: Created slice kubepods-besteffort-pod12f7eaa9_9d20_4e8f_9f20_d2118b28d17a.slice - libcontainer container kubepods-besteffort-pod12f7eaa9_9d20_4e8f_9f20_d2118b28d17a.slice. Jan 13 23:47:16.360988 systemd[1]: Created slice kubepods-besteffort-pod38c0b046_9ddd_4e0f_99d2_de8c0748710c.slice - libcontainer container kubepods-besteffort-pod38c0b046_9ddd_4e0f_99d2_de8c0748710c.slice. 
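The kubepods-*.slice names created above and below embed each pod's UID with its dashes mapped to underscores, prefixed by the pod's QoS class (burstable or besteffort for the pods in this log). A small sketch of that naming pattern as it appears here; podSliceName is a name chosen for illustration, not a kubelet function:

package main

import (
	"fmt"
	"strings"
)

// podSliceName shows the systemd cgroup-slice naming pattern visible in
// the log: "-" in the pod UID becomes "_", prefixed by the QoS-class slice.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UID of the coredns-674b8bbfcf-tfxlt pod from the volume entries below;
	// prints kubepods-burstable-podcc252f99_0335_4f8d_b959_3bfc25386e2c.slice,
	// matching the slice created above.
	fmt.Println(podSliceName("burstable", "cc252f99-0335-4f8d-b959-3bfc25386e2c"))
}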
Jan 13 23:47:16.368837 kubelet[3522]: I0113 23:47:16.368662 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12f7eaa9-9d20-4e8f-9f20-d2118b28d17a-tigera-ca-bundle\") pod \"calico-kube-controllers-86d44f7875-phhfv\" (UID: \"12f7eaa9-9d20-4e8f-9f20-d2118b28d17a\") " pod="calico-system/calico-kube-controllers-86d44f7875-phhfv" Jan 13 23:47:16.369312 kubelet[3522]: I0113 23:47:16.369237 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w26fn\" (UniqueName: \"kubernetes.io/projected/12f7eaa9-9d20-4e8f-9f20-d2118b28d17a-kube-api-access-w26fn\") pod \"calico-kube-controllers-86d44f7875-phhfv\" (UID: \"12f7eaa9-9d20-4e8f-9f20-d2118b28d17a\") " pod="calico-system/calico-kube-controllers-86d44f7875-phhfv" Jan 13 23:47:16.447929 systemd[1]: Created slice kubepods-besteffort-poddb4e6234_4e3a_4385_ad17_4564cf1a27b8.slice - libcontainer container kubepods-besteffort-poddb4e6234_4e3a_4385_ad17_4564cf1a27b8.slice. Jan 13 23:47:16.470203 kubelet[3522]: I0113 23:47:16.470159 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/db4e6234-4e3a-4385-ad17-4564cf1a27b8-calico-apiserver-certs\") pod \"calico-apiserver-5d495df4b7-2bldm\" (UID: \"db4e6234-4e3a-4385-ad17-4564cf1a27b8\") " pod="calico-apiserver/calico-apiserver-5d495df4b7-2bldm" Jan 13 23:47:16.470456 kubelet[3522]: I0113 23:47:16.470430 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9hmj\" (UniqueName: \"kubernetes.io/projected/38c0b046-9ddd-4e0f-99d2-de8c0748710c-kube-api-access-p9hmj\") pod \"calico-apiserver-5d495df4b7-p456r\" (UID: \"38c0b046-9ddd-4e0f-99d2-de8c0748710c\") " pod="calico-apiserver/calico-apiserver-5d495df4b7-p456r" Jan 13 23:47:16.470716 kubelet[3522]: I0113 23:47:16.470691 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/38c0b046-9ddd-4e0f-99d2-de8c0748710c-calico-apiserver-certs\") pod \"calico-apiserver-5d495df4b7-p456r\" (UID: \"38c0b046-9ddd-4e0f-99d2-de8c0748710c\") " pod="calico-apiserver/calico-apiserver-5d495df4b7-p456r" Jan 13 23:47:16.472189 kubelet[3522]: I0113 23:47:16.472147 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ptt2\" (UniqueName: \"kubernetes.io/projected/db4e6234-4e3a-4385-ad17-4564cf1a27b8-kube-api-access-2ptt2\") pod \"calico-apiserver-5d495df4b7-2bldm\" (UID: \"db4e6234-4e3a-4385-ad17-4564cf1a27b8\") " pod="calico-apiserver/calico-apiserver-5d495df4b7-2bldm" Jan 13 23:47:16.475914 containerd[1984]: time="2026-01-13T23:47:16.475824852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tfxlt,Uid:cc252f99-0335-4f8d-b959-3bfc25386e2c,Namespace:kube-system,Attempt:0,}" Jan 13 23:47:16.528017 systemd[1]: Created slice kubepods-besteffort-podf27ddc53_e7e4_41b7_97d9_616c5339cc85.slice - libcontainer container kubepods-besteffort-podf27ddc53_e7e4_41b7_97d9_616c5339cc85.slice. Jan 13 23:47:16.539827 systemd[1]: Created slice kubepods-besteffort-podf5961ca7_c07f_4a8a_924f_01a8914d6008.slice - libcontainer container kubepods-besteffort-podf5961ca7_c07f_4a8a_924f_01a8914d6008.slice. 
Jan 13 23:47:16.574239 kubelet[3522]: I0113 23:47:16.574008 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f27ddc53-e7e4-41b7-97d9-616c5339cc85-goldmane-key-pair\") pod \"goldmane-666569f655-6fffm\" (UID: \"f27ddc53-e7e4-41b7-97d9-616c5339cc85\") " pod="calico-system/goldmane-666569f655-6fffm" Jan 13 23:47:16.574891 kubelet[3522]: I0113 23:47:16.574470 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5g5s\" (UniqueName: \"kubernetes.io/projected/f27ddc53-e7e4-41b7-97d9-616c5339cc85-kube-api-access-p5g5s\") pod \"goldmane-666569f655-6fffm\" (UID: \"f27ddc53-e7e4-41b7-97d9-616c5339cc85\") " pod="calico-system/goldmane-666569f655-6fffm" Jan 13 23:47:16.575409 kubelet[3522]: I0113 23:47:16.575230 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f27ddc53-e7e4-41b7-97d9-616c5339cc85-config\") pod \"goldmane-666569f655-6fffm\" (UID: \"f27ddc53-e7e4-41b7-97d9-616c5339cc85\") " pod="calico-system/goldmane-666569f655-6fffm" Jan 13 23:47:16.576883 kubelet[3522]: I0113 23:47:16.576789 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f27ddc53-e7e4-41b7-97d9-616c5339cc85-goldmane-ca-bundle\") pod \"goldmane-666569f655-6fffm\" (UID: \"f27ddc53-e7e4-41b7-97d9-616c5339cc85\") " pod="calico-system/goldmane-666569f655-6fffm" Jan 13 23:47:16.578496 containerd[1984]: time="2026-01-13T23:47:16.578201796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6f6jh,Uid:fff694d2-51f0-4513-a81f-bedcae1438b9,Namespace:kube-system,Attempt:0,}" Jan 13 23:47:16.637785 containerd[1984]: time="2026-01-13T23:47:16.637649041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d44f7875-phhfv,Uid:12f7eaa9-9d20-4e8f-9f20-d2118b28d17a,Namespace:calico-system,Attempt:0,}" Jan 13 23:47:16.668348 containerd[1984]: time="2026-01-13T23:47:16.668298505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d495df4b7-p456r,Uid:38c0b046-9ddd-4e0f-99d2-de8c0748710c,Namespace:calico-apiserver,Attempt:0,}" Jan 13 23:47:16.682083 kubelet[3522]: I0113 23:47:16.678493 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f5961ca7-c07f-4a8a-924f-01a8914d6008-whisker-backend-key-pair\") pod \"whisker-6f99b44944-t75tg\" (UID: \"f5961ca7-c07f-4a8a-924f-01a8914d6008\") " pod="calico-system/whisker-6f99b44944-t75tg" Jan 13 23:47:16.682240 kubelet[3522]: I0113 23:47:16.682129 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5961ca7-c07f-4a8a-924f-01a8914d6008-whisker-ca-bundle\") pod \"whisker-6f99b44944-t75tg\" (UID: \"f5961ca7-c07f-4a8a-924f-01a8914d6008\") " pod="calico-system/whisker-6f99b44944-t75tg" Jan 13 23:47:16.682240 kubelet[3522]: I0113 23:47:16.682178 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ttv5\" (UniqueName: \"kubernetes.io/projected/f5961ca7-c07f-4a8a-924f-01a8914d6008-kube-api-access-6ttv5\") pod \"whisker-6f99b44944-t75tg\" (UID: 
\"f5961ca7-c07f-4a8a-924f-01a8914d6008\") " pod="calico-system/whisker-6f99b44944-t75tg" Jan 13 23:47:16.777723 containerd[1984]: time="2026-01-13T23:47:16.777557845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d495df4b7-2bldm,Uid:db4e6234-4e3a-4385-ad17-4564cf1a27b8,Namespace:calico-apiserver,Attempt:0,}" Jan 13 23:47:16.839444 containerd[1984]: time="2026-01-13T23:47:16.839043782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-6fffm,Uid:f27ddc53-e7e4-41b7-97d9-616c5339cc85,Namespace:calico-system,Attempt:0,}" Jan 13 23:47:16.848266 containerd[1984]: time="2026-01-13T23:47:16.847909742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f99b44944-t75tg,Uid:f5961ca7-c07f-4a8a-924f-01a8914d6008,Namespace:calico-system,Attempt:0,}" Jan 13 23:47:16.913462 systemd[1]: Created slice kubepods-besteffort-pod62188975_46ce_424e_8434_9b05ea3b2915.slice - libcontainer container kubepods-besteffort-pod62188975_46ce_424e_8434_9b05ea3b2915.slice. Jan 13 23:47:16.920613 containerd[1984]: time="2026-01-13T23:47:16.920557094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t67lh,Uid:62188975-46ce-424e-8434-9b05ea3b2915,Namespace:calico-system,Attempt:0,}" Jan 13 23:47:17.030529 containerd[1984]: time="2026-01-13T23:47:17.030332939Z" level=error msg="Failed to destroy network for sandbox \"f76e3d538c29b0b13b730094e29eb13aee0d7287f90403fccb6a7323018c3b0d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:47:17.053624 containerd[1984]: time="2026-01-13T23:47:17.052668779Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tfxlt,Uid:cc252f99-0335-4f8d-b959-3bfc25386e2c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f76e3d538c29b0b13b730094e29eb13aee0d7287f90403fccb6a7323018c3b0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:47:17.056143 kubelet[3522]: E0113 23:47:17.055814 3522 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f76e3d538c29b0b13b730094e29eb13aee0d7287f90403fccb6a7323018c3b0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:47:17.056143 kubelet[3522]: E0113 23:47:17.055907 3522 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f76e3d538c29b0b13b730094e29eb13aee0d7287f90403fccb6a7323018c3b0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tfxlt" Jan 13 23:47:17.056143 kubelet[3522]: E0113 23:47:17.055942 3522 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f76e3d538c29b0b13b730094e29eb13aee0d7287f90403fccb6a7323018c3b0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tfxlt" Jan 13 23:47:17.057694 kubelet[3522]: E0113 23:47:17.056029 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-tfxlt_kube-system(cc252f99-0335-4f8d-b959-3bfc25386e2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-tfxlt_kube-system(cc252f99-0335-4f8d-b959-3bfc25386e2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f76e3d538c29b0b13b730094e29eb13aee0d7287f90403fccb6a7323018c3b0d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-tfxlt" podUID="cc252f99-0335-4f8d-b959-3bfc25386e2c" Jan 13 23:47:17.058176 systemd[1]: run-netns-cni\x2d02fbc52a\x2df1b9\x2dc75a\x2ddb69\x2dfb78b31aefb2.mount: Deactivated successfully. Jan 13 23:47:17.083804 containerd[1984]: time="2026-01-13T23:47:17.083729171Z" level=error msg="Failed to destroy network for sandbox \"9f62e4ad7ab38e37ff71d66234d9414fbf1718deb7a85278239eccfeb3c6f1d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:47:17.089926 systemd[1]: run-netns-cni\x2d1b331097\x2d68e3\x2d2c2b\x2dc7cc\x2d3ad1832a00f4.mount: Deactivated successfully. Jan 13 23:47:17.100809 containerd[1984]: time="2026-01-13T23:47:17.100678979Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d44f7875-phhfv,Uid:12f7eaa9-9d20-4e8f-9f20-d2118b28d17a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f62e4ad7ab38e37ff71d66234d9414fbf1718deb7a85278239eccfeb3c6f1d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:47:17.101577 kubelet[3522]: E0113 23:47:17.101065 3522 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f62e4ad7ab38e37ff71d66234d9414fbf1718deb7a85278239eccfeb3c6f1d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:47:17.101577 kubelet[3522]: E0113 23:47:17.101142 3522 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f62e4ad7ab38e37ff71d66234d9414fbf1718deb7a85278239eccfeb3c6f1d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d44f7875-phhfv" Jan 13 23:47:17.101577 kubelet[3522]: E0113 23:47:17.101175 3522 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f62e4ad7ab38e37ff71d66234d9414fbf1718deb7a85278239eccfeb3c6f1d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-86d44f7875-phhfv" Jan 13 23:47:17.102056 kubelet[3522]: E0113 23:47:17.101266 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d44f7875-phhfv_calico-system(12f7eaa9-9d20-4e8f-9f20-d2118b28d17a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d44f7875-phhfv_calico-system(12f7eaa9-9d20-4e8f-9f20-d2118b28d17a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f62e4ad7ab38e37ff71d66234d9414fbf1718deb7a85278239eccfeb3c6f1d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d44f7875-phhfv" podUID="12f7eaa9-9d20-4e8f-9f20-d2118b28d17a" Jan 13 23:47:17.129711 containerd[1984]: time="2026-01-13T23:47:17.129478379Z" level=error msg="Failed to destroy network for sandbox \"e46f4b5d5a85bcda8836987fdd05b04d8afd29beb6367ed9720b5e5d139840b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:47:17.135300 systemd[1]: run-netns-cni\x2d921e5a34\x2d8f47\x2dc230\x2dee22\x2d6031e462f500.mount: Deactivated successfully. Jan 13 23:47:17.142228 containerd[1984]: time="2026-01-13T23:47:17.142131467Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6f6jh,Uid:fff694d2-51f0-4513-a81f-bedcae1438b9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e46f4b5d5a85bcda8836987fdd05b04d8afd29beb6367ed9720b5e5d139840b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:47:17.143120 kubelet[3522]: E0113 23:47:17.142470 3522 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e46f4b5d5a85bcda8836987fdd05b04d8afd29beb6367ed9720b5e5d139840b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:47:17.143120 kubelet[3522]: E0113 23:47:17.142567 3522 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e46f4b5d5a85bcda8836987fdd05b04d8afd29beb6367ed9720b5e5d139840b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6f6jh" Jan 13 23:47:17.143120 kubelet[3522]: E0113 23:47:17.142603 3522 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e46f4b5d5a85bcda8836987fdd05b04d8afd29beb6367ed9720b5e5d139840b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6f6jh" Jan 13 23:47:17.143360 kubelet[3522]: E0113 23:47:17.142683 3522 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6f6jh_kube-system(fff694d2-51f0-4513-a81f-bedcae1438b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6f6jh_kube-system(fff694d2-51f0-4513-a81f-bedcae1438b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e46f4b5d5a85bcda8836987fdd05b04d8afd29beb6367ed9720b5e5d139840b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6f6jh" podUID="fff694d2-51f0-4513-a81f-bedcae1438b9" Jan 13 23:47:17.190279 containerd[1984]: time="2026-01-13T23:47:17.189989915Z" level=error msg="Failed to destroy network for sandbox \"4dcc50d2c46961cbd8313cc4fdc10fdfbd410a45ce44190be42f036014694ea8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:47:17.197217 systemd[1]: run-netns-cni\x2d426f1e58\x2da49c\x2de531\x2d84f0\x2dbd4dafe184c9.mount: Deactivated successfully. Jan 13 23:47:17.241403 containerd[1984]: time="2026-01-13T23:47:17.241240440Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d495df4b7-p456r,Uid:38c0b046-9ddd-4e0f-99d2-de8c0748710c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dcc50d2c46961cbd8313cc4fdc10fdfbd410a45ce44190be42f036014694ea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:47:17.243157 kubelet[3522]: E0113 23:47:17.242112 3522 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dcc50d2c46961cbd8313cc4fdc10fdfbd410a45ce44190be42f036014694ea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:47:17.243157 kubelet[3522]: E0113 23:47:17.242194 3522 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dcc50d2c46961cbd8313cc4fdc10fdfbd410a45ce44190be42f036014694ea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d495df4b7-p456r" Jan 13 23:47:17.243157 kubelet[3522]: E0113 23:47:17.242238 3522 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dcc50d2c46961cbd8313cc4fdc10fdfbd410a45ce44190be42f036014694ea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d495df4b7-p456r" Jan 13 23:47:17.243414 containerd[1984]: time="2026-01-13T23:47:17.242804976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 13 23:47:17.243480 kubelet[3522]: E0113 23:47:17.242305 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-apiserver-5d495df4b7-p456r_calico-apiserver(38c0b046-9ddd-4e0f-99d2-de8c0748710c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d495df4b7-p456r_calico-apiserver(38c0b046-9ddd-4e0f-99d2-de8c0748710c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4dcc50d2c46961cbd8313cc4fdc10fdfbd410a45ce44190be42f036014694ea8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d495df4b7-p456r" podUID="38c0b046-9ddd-4e0f-99d2-de8c0748710c" Jan 13 23:47:17.288938 containerd[1984]: time="2026-01-13T23:47:17.288874056Z" level=error msg="Failed to destroy network for sandbox \"c1a729c2c493d6c8a4ec887b7fe834321ed49cc1bf1a0547e8340091795c3cbe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:47:17.290429 containerd[1984]: time="2026-01-13T23:47:17.290349516Z" level=error msg="Failed to destroy network for sandbox \"db896896e3fe7d820e06eb9e0a788e183982e696f3495f138c57fa45898faffa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:47:17.294098 containerd[1984]: time="2026-01-13T23:47:17.293939316Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-6fffm,Uid:f27ddc53-e7e4-41b7-97d9-616c5339cc85,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1a729c2c493d6c8a4ec887b7fe834321ed49cc1bf1a0547e8340091795c3cbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:47:17.294431 kubelet[3522]: E0113 23:47:17.294278 3522 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1a729c2c493d6c8a4ec887b7fe834321ed49cc1bf1a0547e8340091795c3cbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:47:17.294431 kubelet[3522]: E0113 23:47:17.294354 3522 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1a729c2c493d6c8a4ec887b7fe834321ed49cc1bf1a0547e8340091795c3cbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-6fffm" Jan 13 23:47:17.294431 kubelet[3522]: E0113 23:47:17.294395 3522 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1a729c2c493d6c8a4ec887b7fe834321ed49cc1bf1a0547e8340091795c3cbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-6fffm" Jan 13 23:47:17.295525 kubelet[3522]: E0113 
23:47:17.294765 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-6fffm_calico-system(f27ddc53-e7e4-41b7-97d9-616c5339cc85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-6fffm_calico-system(f27ddc53-e7e4-41b7-97d9-616c5339cc85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1a729c2c493d6c8a4ec887b7fe834321ed49cc1bf1a0547e8340091795c3cbe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-6fffm" podUID="f27ddc53-e7e4-41b7-97d9-616c5339cc85" Jan 13 23:47:17.298217 containerd[1984]: time="2026-01-13T23:47:17.298115616Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f99b44944-t75tg,Uid:f5961ca7-c07f-4a8a-924f-01a8914d6008,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"db896896e3fe7d820e06eb9e0a788e183982e696f3495f138c57fa45898faffa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:47:17.299798 kubelet[3522]: E0113 23:47:17.299187 3522 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db896896e3fe7d820e06eb9e0a788e183982e696f3495f138c57fa45898faffa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:47:17.299798 kubelet[3522]: E0113 23:47:17.299262 3522 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db896896e3fe7d820e06eb9e0a788e183982e696f3495f138c57fa45898faffa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f99b44944-t75tg" Jan 13 23:47:17.299798 kubelet[3522]: E0113 23:47:17.299305 3522 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db896896e3fe7d820e06eb9e0a788e183982e696f3495f138c57fa45898faffa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f99b44944-t75tg" Jan 13 23:47:17.300556 kubelet[3522]: E0113 23:47:17.299389 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f99b44944-t75tg_calico-system(f5961ca7-c07f-4a8a-924f-01a8914d6008)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f99b44944-t75tg_calico-system(f5961ca7-c07f-4a8a-924f-01a8914d6008)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db896896e3fe7d820e06eb9e0a788e183982e696f3495f138c57fa45898faffa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f99b44944-t75tg" 
podUID="f5961ca7-c07f-4a8a-924f-01a8914d6008" Jan 13 23:47:17.313473 containerd[1984]: time="2026-01-13T23:47:17.313416588Z" level=error msg="Failed to destroy network for sandbox \"5f57bacd20030a07dca11b088be75e70dc5ddec2094c15fc5bdd7d53e90a5728\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:47:17.318297 containerd[1984]: time="2026-01-13T23:47:17.318195156Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d495df4b7-2bldm,Uid:db4e6234-4e3a-4385-ad17-4564cf1a27b8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f57bacd20030a07dca11b088be75e70dc5ddec2094c15fc5bdd7d53e90a5728\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:47:17.319569 kubelet[3522]: E0113 23:47:17.319457 3522 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f57bacd20030a07dca11b088be75e70dc5ddec2094c15fc5bdd7d53e90a5728\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:47:17.319711 kubelet[3522]: E0113 23:47:17.319595 3522 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f57bacd20030a07dca11b088be75e70dc5ddec2094c15fc5bdd7d53e90a5728\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d495df4b7-2bldm" Jan 13 23:47:17.319711 kubelet[3522]: E0113 23:47:17.319674 3522 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f57bacd20030a07dca11b088be75e70dc5ddec2094c15fc5bdd7d53e90a5728\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d495df4b7-2bldm" Jan 13 23:47:17.321099 kubelet[3522]: E0113 23:47:17.319790 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d495df4b7-2bldm_calico-apiserver(db4e6234-4e3a-4385-ad17-4564cf1a27b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d495df4b7-2bldm_calico-apiserver(db4e6234-4e3a-4385-ad17-4564cf1a27b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f57bacd20030a07dca11b088be75e70dc5ddec2094c15fc5bdd7d53e90a5728\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d495df4b7-2bldm" podUID="db4e6234-4e3a-4385-ad17-4564cf1a27b8" Jan 13 23:47:17.340107 containerd[1984]: time="2026-01-13T23:47:17.340034160Z" level=error msg="Failed to destroy network for sandbox \"0376e2341744402f42043b6c5b51984b0c3a5e4ed6ab836180eea496e7499e27\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:47:17.344168 containerd[1984]: time="2026-01-13T23:47:17.343848060Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t67lh,Uid:62188975-46ce-424e-8434-9b05ea3b2915,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0376e2341744402f42043b6c5b51984b0c3a5e4ed6ab836180eea496e7499e27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:47:17.344329 kubelet[3522]: E0113 23:47:17.344206 3522 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0376e2341744402f42043b6c5b51984b0c3a5e4ed6ab836180eea496e7499e27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:47:17.344329 kubelet[3522]: E0113 23:47:17.344279 3522 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0376e2341744402f42043b6c5b51984b0c3a5e4ed6ab836180eea496e7499e27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t67lh" Jan 13 23:47:17.344329 kubelet[3522]: E0113 23:47:17.344316 3522 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0376e2341744402f42043b6c5b51984b0c3a5e4ed6ab836180eea496e7499e27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t67lh" Jan 13 23:47:17.344584 kubelet[3522]: E0113 23:47:17.344389 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t67lh_calico-system(62188975-46ce-424e-8434-9b05ea3b2915)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t67lh_calico-system(62188975-46ce-424e-8434-9b05ea3b2915)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0376e2341744402f42043b6c5b51984b0c3a5e4ed6ab836180eea496e7499e27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t67lh" podUID="62188975-46ce-424e-8434-9b05ea3b2915" Jan 13 23:47:18.000729 systemd[1]: run-netns-cni\x2d66558017\x2d411e\x2dadf2\x2d117f\x2d2fa030a39925.mount: Deactivated successfully. Jan 13 23:47:18.000912 systemd[1]: run-netns-cni\x2d104267c7\x2d0bb0\x2d74e3\x2de2e0\x2dcd5dded5d341.mount: Deactivated successfully. Jan 13 23:47:18.001029 systemd[1]: run-netns-cni\x2d7af2f5cd\x2d0104\x2dc316\x2d24f9\x2dfc5df2236eed.mount: Deactivated successfully. Jan 13 23:47:18.001173 systemd[1]: run-netns-cni\x2d4cc82cc5\x2d2c9d\x2ded43\x2d9bc8\x2d6c3c65820c53.mount: Deactivated successfully. 
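The repeated sandbox failures above all reduce to one condition: the Calico CNI plugin cannot stat /var/lib/calico/nodename because calico/node has not come up on this host yet. As a minimal standalone sketch (not Calico's own code), the check that the error text describes looks like the following; the only input is the path quoted in the log messages.

```go
// Minimal sketch of the readiness condition behind the repeated
// "stat /var/lib/calico/nodename" errors above: the Calico CNI plugin
// expects calico/node to have written this file, and until it exists
// every CNI ADD fails and pod sandboxes cannot be created.
package main

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename" // path taken from the log messages

	if _, err := os.Stat(nodenameFile); err != nil {
		// Matches the failure mode in the log: calico/node has not started
		// (or has not mounted /var/lib/calico/) on this node yet.
		fmt.Printf("not ready: %v\n", err)
		return
	}
	name, err := os.ReadFile(nodenameFile)
	if err != nil {
		fmt.Printf("unreadable: %v\n", err)
		return
	}
	fmt.Printf("calico node name: %s\n", name)
}
```

Once calico-node is pulled and started in the entries that follow, the file is in place and later sandbox setups (for example the whisker pod further below) complete and receive an IP.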
Jan 13 23:47:24.734886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2196936593.mount: Deactivated successfully. Jan 13 23:47:24.821752 containerd[1984]: time="2026-01-13T23:47:24.821588721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:47:24.824085 containerd[1984]: time="2026-01-13T23:47:24.824009517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 13 23:47:24.827032 containerd[1984]: time="2026-01-13T23:47:24.826933377Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:47:24.858539 containerd[1984]: time="2026-01-13T23:47:24.857657601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:47:24.860282 containerd[1984]: time="2026-01-13T23:47:24.860215713Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 7.617355321s" Jan 13 23:47:24.860548 containerd[1984]: time="2026-01-13T23:47:24.860489565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 13 23:47:24.903221 containerd[1984]: time="2026-01-13T23:47:24.903129262Z" level=info msg="CreateContainer within sandbox \"f990c9d4e1e2d1dd1c181f79a1fb5b67516c23e19625cc24f3dafaf5860b4c23\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 23:47:24.936598 containerd[1984]: time="2026-01-13T23:47:24.933237118Z" level=info msg="Container 9f49d3b8a689612c7ababed69667737be910654ec1cc358121d62b7539f2b22c: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:47:24.947004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2593582818.mount: Deactivated successfully. Jan 13 23:47:24.966996 containerd[1984]: time="2026-01-13T23:47:24.966934882Z" level=info msg="CreateContainer within sandbox \"f990c9d4e1e2d1dd1c181f79a1fb5b67516c23e19625cc24f3dafaf5860b4c23\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9f49d3b8a689612c7ababed69667737be910654ec1cc358121d62b7539f2b22c\"" Jan 13 23:47:24.968431 containerd[1984]: time="2026-01-13T23:47:24.968382682Z" level=info msg="StartContainer for \"9f49d3b8a689612c7ababed69667737be910654ec1cc358121d62b7539f2b22c\"" Jan 13 23:47:24.973089 containerd[1984]: time="2026-01-13T23:47:24.972973318Z" level=info msg="connecting to shim 9f49d3b8a689612c7ababed69667737be910654ec1cc358121d62b7539f2b22c" address="unix:///run/containerd/s/77576e513c925ff9c2cba6c4aaccdba1eaebf9eb2accc08ac977b1238cb6f237" protocol=ttrpc version=3 Jan 13 23:47:25.026942 systemd[1]: Started cri-containerd-9f49d3b8a689612c7ababed69667737be910654ec1cc358121d62b7539f2b22c.scope - libcontainer container 9f49d3b8a689612c7ababed69667737be910654ec1cc358121d62b7539f2b22c. 
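For context on the pull that unblocked networking, the two figures containerd logged above (the bytes read and the elapsed pull time for ghcr.io/flatcar/calico/node:v3.30.4) imply an effective transfer rate of a little under 20 MB/s. A tiny sketch of that arithmetic, using only the logged values:

```go
// Minimal sketch: derive the effective pull rate from the two figures
// containerd logged above for the calico/node:v3.30.4 image.
package main

import "fmt"

func main() {
	const bytesRead = 150930912        // "bytes read" reported while pulling the image
	const elapsedSeconds = 7.617355321 // "in 7.617355321s" from the "Pulled image" entry

	rate := float64(bytesRead) / elapsedSeconds
	fmt.Printf("effective pull rate: %.1f MB/s (%.1f MiB/s)\n",
		rate/1e6, rate/(1024*1024))
}
```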
Jan 13 23:47:25.112153 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 13 23:47:25.112436 kernel: audit: type=1334 audit(1768348045.107:598): prog-id=179 op=LOAD Jan 13 23:47:25.107000 audit: BPF prog-id=179 op=LOAD Jan 13 23:47:25.107000 audit[4516]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4025 pid=4516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:25.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966343964336238613638393631326337616261626564363936363737 Jan 13 23:47:25.120644 kernel: audit: type=1300 audit(1768348045.107:598): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4025 pid=4516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:25.130206 kernel: audit: type=1327 audit(1768348045.107:598): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966343964336238613638393631326337616261626564363936363737 Jan 13 23:47:25.130335 kernel: audit: type=1334 audit(1768348045.107:599): prog-id=180 op=LOAD Jan 13 23:47:25.107000 audit: BPF prog-id=180 op=LOAD Jan 13 23:47:25.107000 audit[4516]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4025 pid=4516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:25.137017 kernel: audit: type=1300 audit(1768348045.107:599): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4025 pid=4516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:25.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966343964336238613638393631326337616261626564363936363737 Jan 13 23:47:25.145467 kernel: audit: type=1327 audit(1768348045.107:599): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966343964336238613638393631326337616261626564363936363737 Jan 13 23:47:25.107000 audit: BPF prog-id=180 op=UNLOAD Jan 13 23:47:25.148187 kernel: audit: type=1334 audit(1768348045.107:600): prog-id=180 op=UNLOAD Jan 13 23:47:25.148965 kernel: audit: type=1300 audit(1768348045.107:600): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4025 pid=4516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:25.107000 
audit[4516]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4025 pid=4516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:25.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966343964336238613638393631326337616261626564363936363737 Jan 13 23:47:25.165648 kernel: audit: type=1327 audit(1768348045.107:600): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966343964336238613638393631326337616261626564363936363737 Jan 13 23:47:25.107000 audit: BPF prog-id=179 op=UNLOAD Jan 13 23:47:25.172537 kernel: audit: type=1334 audit(1768348045.107:601): prog-id=179 op=UNLOAD Jan 13 23:47:25.107000 audit[4516]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4025 pid=4516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:25.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966343964336238613638393631326337616261626564363936363737 Jan 13 23:47:25.107000 audit: BPF prog-id=181 op=LOAD Jan 13 23:47:25.107000 audit[4516]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4025 pid=4516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:25.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966343964336238613638393631326337616261626564363936363737 Jan 13 23:47:25.220327 containerd[1984]: time="2026-01-13T23:47:25.220175143Z" level=info msg="StartContainer for \"9f49d3b8a689612c7ababed69667737be910654ec1cc358121d62b7539f2b22c\" returns successfully" Jan 13 23:47:25.662013 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 23:47:25.662214 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 13 23:47:25.891478 kubelet[3522]: I0113 23:47:25.891360 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-j5vz9" podStartSLOduration=2.107395238 podStartE2EDuration="20.891298823s" podCreationTimestamp="2026-01-13 23:47:05 +0000 UTC" firstStartedPulling="2026-01-13 23:47:06.079571364 +0000 UTC m=+32.487280338" lastFinishedPulling="2026-01-13 23:47:24.863474961 +0000 UTC m=+51.271183923" observedRunningTime="2026-01-13 23:47:25.283796816 +0000 UTC m=+51.691505802" watchObservedRunningTime="2026-01-13 23:47:25.891298823 +0000 UTC m=+52.299007809" Jan 13 23:47:26.062982 kubelet[3522]: I0113 23:47:26.062714 3522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5961ca7-c07f-4a8a-924f-01a8914d6008-whisker-ca-bundle\") pod \"f5961ca7-c07f-4a8a-924f-01a8914d6008\" (UID: \"f5961ca7-c07f-4a8a-924f-01a8914d6008\") " Jan 13 23:47:26.062982 kubelet[3522]: I0113 23:47:26.062915 3522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f5961ca7-c07f-4a8a-924f-01a8914d6008-whisker-backend-key-pair\") pod \"f5961ca7-c07f-4a8a-924f-01a8914d6008\" (UID: \"f5961ca7-c07f-4a8a-924f-01a8914d6008\") " Jan 13 23:47:26.062982 kubelet[3522]: I0113 23:47:26.062964 3522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ttv5\" (UniqueName: \"kubernetes.io/projected/f5961ca7-c07f-4a8a-924f-01a8914d6008-kube-api-access-6ttv5\") pod \"f5961ca7-c07f-4a8a-924f-01a8914d6008\" (UID: \"f5961ca7-c07f-4a8a-924f-01a8914d6008\") " Jan 13 23:47:26.064275 kubelet[3522]: I0113 23:47:26.064196 3522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5961ca7-c07f-4a8a-924f-01a8914d6008-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f5961ca7-c07f-4a8a-924f-01a8914d6008" (UID: "f5961ca7-c07f-4a8a-924f-01a8914d6008"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 13 23:47:26.076230 systemd[1]: var-lib-kubelet-pods-f5961ca7\x2dc07f\x2d4a8a\x2d924f\x2d01a8914d6008-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6ttv5.mount: Deactivated successfully. Jan 13 23:47:26.077692 systemd[1]: var-lib-kubelet-pods-f5961ca7\x2dc07f\x2d4a8a\x2d924f\x2d01a8914d6008-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 13 23:47:26.079022 kubelet[3522]: I0113 23:47:26.078224 3522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5961ca7-c07f-4a8a-924f-01a8914d6008-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f5961ca7-c07f-4a8a-924f-01a8914d6008" (UID: "f5961ca7-c07f-4a8a-924f-01a8914d6008"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 13 23:47:26.082255 kubelet[3522]: I0113 23:47:26.082154 3522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5961ca7-c07f-4a8a-924f-01a8914d6008-kube-api-access-6ttv5" (OuterVolumeSpecName: "kube-api-access-6ttv5") pod "f5961ca7-c07f-4a8a-924f-01a8914d6008" (UID: "f5961ca7-c07f-4a8a-924f-01a8914d6008"). InnerVolumeSpecName "kube-api-access-6ttv5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 13 23:47:26.164530 kubelet[3522]: I0113 23:47:26.164452 3522 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5961ca7-c07f-4a8a-924f-01a8914d6008-whisker-ca-bundle\") on node \"ip-172-31-28-147\" DevicePath \"\"" Jan 13 23:47:26.164530 kubelet[3522]: I0113 23:47:26.164534 3522 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f5961ca7-c07f-4a8a-924f-01a8914d6008-whisker-backend-key-pair\") on node \"ip-172-31-28-147\" DevicePath \"\"" Jan 13 23:47:26.164758 kubelet[3522]: I0113 23:47:26.164648 3522 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6ttv5\" (UniqueName: \"kubernetes.io/projected/f5961ca7-c07f-4a8a-924f-01a8914d6008-kube-api-access-6ttv5\") on node \"ip-172-31-28-147\" DevicePath \"\"" Jan 13 23:47:26.274586 systemd[1]: Removed slice kubepods-besteffort-podf5961ca7_c07f_4a8a_924f_01a8914d6008.slice - libcontainer container kubepods-besteffort-podf5961ca7_c07f_4a8a_924f_01a8914d6008.slice. Jan 13 23:47:26.426655 systemd[1]: Created slice kubepods-besteffort-podd4afc04a_ba35_4c2d_9726_5e2240fe2e11.slice - libcontainer container kubepods-besteffort-podd4afc04a_ba35_4c2d_9726_5e2240fe2e11.slice. Jan 13 23:47:26.468573 kubelet[3522]: I0113 23:47:26.468482 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62ft2\" (UniqueName: \"kubernetes.io/projected/d4afc04a-ba35-4c2d-9726-5e2240fe2e11-kube-api-access-62ft2\") pod \"whisker-74c798dd46-h9tcx\" (UID: \"d4afc04a-ba35-4c2d-9726-5e2240fe2e11\") " pod="calico-system/whisker-74c798dd46-h9tcx" Jan 13 23:47:26.468736 kubelet[3522]: I0113 23:47:26.468608 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4afc04a-ba35-4c2d-9726-5e2240fe2e11-whisker-ca-bundle\") pod \"whisker-74c798dd46-h9tcx\" (UID: \"d4afc04a-ba35-4c2d-9726-5e2240fe2e11\") " pod="calico-system/whisker-74c798dd46-h9tcx" Jan 13 23:47:26.468736 kubelet[3522]: I0113 23:47:26.468654 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d4afc04a-ba35-4c2d-9726-5e2240fe2e11-whisker-backend-key-pair\") pod \"whisker-74c798dd46-h9tcx\" (UID: \"d4afc04a-ba35-4c2d-9726-5e2240fe2e11\") " pod="calico-system/whisker-74c798dd46-h9tcx" Jan 13 23:47:26.739700 containerd[1984]: time="2026-01-13T23:47:26.739175639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74c798dd46-h9tcx,Uid:d4afc04a-ba35-4c2d-9726-5e2240fe2e11,Namespace:calico-system,Attempt:0,}" Jan 13 23:47:27.904414 kubelet[3522]: I0113 23:47:27.904058 3522 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5961ca7-c07f-4a8a-924f-01a8914d6008" path="/var/lib/kubelet/pods/f5961ca7-c07f-4a8a-924f-01a8914d6008/volumes" Jan 13 23:47:28.279808 (udev-worker)[4577]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 23:47:28.282329 systemd-networkd[1574]: caliefd94b02907: Link UP Jan 13 23:47:28.286891 systemd-networkd[1574]: caliefd94b02907: Gained carrier Jan 13 23:47:28.420554 containerd[1984]: 2026-01-13 23:47:26.911 [INFO][4628] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 23:47:28.420554 containerd[1984]: 2026-01-13 23:47:27.932 [INFO][4628] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--147-k8s-whisker--74c798dd46--h9tcx-eth0 whisker-74c798dd46- calico-system d4afc04a-ba35-4c2d-9726-5e2240fe2e11 962 0 2026-01-13 23:47:26 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:74c798dd46 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-28-147 whisker-74c798dd46-h9tcx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliefd94b02907 [] [] }} ContainerID="a5523b10ded659358ddc29bcb9b0c9afdb69e3e14a060afaec852c0b44d58c1e" Namespace="calico-system" Pod="whisker-74c798dd46-h9tcx" WorkloadEndpoint="ip--172--31--28--147-k8s-whisker--74c798dd46--h9tcx-" Jan 13 23:47:28.420554 containerd[1984]: 2026-01-13 23:47:27.932 [INFO][4628] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a5523b10ded659358ddc29bcb9b0c9afdb69e3e14a060afaec852c0b44d58c1e" Namespace="calico-system" Pod="whisker-74c798dd46-h9tcx" WorkloadEndpoint="ip--172--31--28--147-k8s-whisker--74c798dd46--h9tcx-eth0" Jan 13 23:47:28.420554 containerd[1984]: 2026-01-13 23:47:28.130 [INFO][4725] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5523b10ded659358ddc29bcb9b0c9afdb69e3e14a060afaec852c0b44d58c1e" HandleID="k8s-pod-network.a5523b10ded659358ddc29bcb9b0c9afdb69e3e14a060afaec852c0b44d58c1e" Workload="ip--172--31--28--147-k8s-whisker--74c798dd46--h9tcx-eth0" Jan 13 23:47:28.421347 containerd[1984]: 2026-01-13 23:47:28.131 [INFO][4725] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a5523b10ded659358ddc29bcb9b0c9afdb69e3e14a060afaec852c0b44d58c1e" HandleID="k8s-pod-network.a5523b10ded659358ddc29bcb9b0c9afdb69e3e14a060afaec852c0b44d58c1e" Workload="ip--172--31--28--147-k8s-whisker--74c798dd46--h9tcx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003904f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-147", "pod":"whisker-74c798dd46-h9tcx", "timestamp":"2026-01-13 23:47:28.130583398 +0000 UTC"}, Hostname:"ip-172-31-28-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:47:28.421347 containerd[1984]: 2026-01-13 23:47:28.131 [INFO][4725] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:47:28.421347 containerd[1984]: 2026-01-13 23:47:28.132 [INFO][4725] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 13 23:47:28.421347 containerd[1984]: 2026-01-13 23:47:28.132 [INFO][4725] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-147' Jan 13 23:47:28.421347 containerd[1984]: 2026-01-13 23:47:28.157 [INFO][4725] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a5523b10ded659358ddc29bcb9b0c9afdb69e3e14a060afaec852c0b44d58c1e" host="ip-172-31-28-147" Jan 13 23:47:28.421347 containerd[1984]: 2026-01-13 23:47:28.170 [INFO][4725] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-147" Jan 13 23:47:28.421347 containerd[1984]: 2026-01-13 23:47:28.179 [INFO][4725] ipam/ipam.go 511: Trying affinity for 192.168.14.192/26 host="ip-172-31-28-147" Jan 13 23:47:28.421347 containerd[1984]: 2026-01-13 23:47:28.182 [INFO][4725] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.192/26 host="ip-172-31-28-147" Jan 13 23:47:28.421347 containerd[1984]: 2026-01-13 23:47:28.188 [INFO][4725] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ip-172-31-28-147" Jan 13 23:47:28.422886 containerd[1984]: 2026-01-13 23:47:28.188 [INFO][4725] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.a5523b10ded659358ddc29bcb9b0c9afdb69e3e14a060afaec852c0b44d58c1e" host="ip-172-31-28-147" Jan 13 23:47:28.422886 containerd[1984]: 2026-01-13 23:47:28.192 [INFO][4725] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a5523b10ded659358ddc29bcb9b0c9afdb69e3e14a060afaec852c0b44d58c1e Jan 13 23:47:28.422886 containerd[1984]: 2026-01-13 23:47:28.205 [INFO][4725] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.a5523b10ded659358ddc29bcb9b0c9afdb69e3e14a060afaec852c0b44d58c1e" host="ip-172-31-28-147" Jan 13 23:47:28.422886 containerd[1984]: 2026-01-13 23:47:28.222 [INFO][4725] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.14.193/26] block=192.168.14.192/26 handle="k8s-pod-network.a5523b10ded659358ddc29bcb9b0c9afdb69e3e14a060afaec852c0b44d58c1e" host="ip-172-31-28-147" Jan 13 23:47:28.422886 containerd[1984]: 2026-01-13 23:47:28.222 [INFO][4725] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.193/26] handle="k8s-pod-network.a5523b10ded659358ddc29bcb9b0c9afdb69e3e14a060afaec852c0b44d58c1e" host="ip-172-31-28-147" Jan 13 23:47:28.422886 containerd[1984]: 2026-01-13 23:47:28.222 [INFO][4725] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 13 23:47:28.422886 containerd[1984]: 2026-01-13 23:47:28.222 [INFO][4725] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.14.193/26] IPv6=[] ContainerID="a5523b10ded659358ddc29bcb9b0c9afdb69e3e14a060afaec852c0b44d58c1e" HandleID="k8s-pod-network.a5523b10ded659358ddc29bcb9b0c9afdb69e3e14a060afaec852c0b44d58c1e" Workload="ip--172--31--28--147-k8s-whisker--74c798dd46--h9tcx-eth0" Jan 13 23:47:28.423213 containerd[1984]: 2026-01-13 23:47:28.234 [INFO][4628] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a5523b10ded659358ddc29bcb9b0c9afdb69e3e14a060afaec852c0b44d58c1e" Namespace="calico-system" Pod="whisker-74c798dd46-h9tcx" WorkloadEndpoint="ip--172--31--28--147-k8s-whisker--74c798dd46--h9tcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--147-k8s-whisker--74c798dd46--h9tcx-eth0", GenerateName:"whisker-74c798dd46-", Namespace:"calico-system", SelfLink:"", UID:"d4afc04a-ba35-4c2d-9726-5e2240fe2e11", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 47, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74c798dd46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-147", ContainerID:"", Pod:"whisker-74c798dd46-h9tcx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.14.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliefd94b02907", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:47:28.423213 containerd[1984]: 2026-01-13 23:47:28.235 [INFO][4628] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.193/32] ContainerID="a5523b10ded659358ddc29bcb9b0c9afdb69e3e14a060afaec852c0b44d58c1e" Namespace="calico-system" Pod="whisker-74c798dd46-h9tcx" WorkloadEndpoint="ip--172--31--28--147-k8s-whisker--74c798dd46--h9tcx-eth0" Jan 13 23:47:28.425037 containerd[1984]: 2026-01-13 23:47:28.235 [INFO][4628] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliefd94b02907 ContainerID="a5523b10ded659358ddc29bcb9b0c9afdb69e3e14a060afaec852c0b44d58c1e" Namespace="calico-system" Pod="whisker-74c798dd46-h9tcx" WorkloadEndpoint="ip--172--31--28--147-k8s-whisker--74c798dd46--h9tcx-eth0" Jan 13 23:47:28.425037 containerd[1984]: 2026-01-13 23:47:28.308 [INFO][4628] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a5523b10ded659358ddc29bcb9b0c9afdb69e3e14a060afaec852c0b44d58c1e" Namespace="calico-system" Pod="whisker-74c798dd46-h9tcx" WorkloadEndpoint="ip--172--31--28--147-k8s-whisker--74c798dd46--h9tcx-eth0" Jan 13 23:47:28.425151 containerd[1984]: 2026-01-13 23:47:28.309 [INFO][4628] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a5523b10ded659358ddc29bcb9b0c9afdb69e3e14a060afaec852c0b44d58c1e" Namespace="calico-system" Pod="whisker-74c798dd46-h9tcx" 
WorkloadEndpoint="ip--172--31--28--147-k8s-whisker--74c798dd46--h9tcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--147-k8s-whisker--74c798dd46--h9tcx-eth0", GenerateName:"whisker-74c798dd46-", Namespace:"calico-system", SelfLink:"", UID:"d4afc04a-ba35-4c2d-9726-5e2240fe2e11", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 47, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74c798dd46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-147", ContainerID:"a5523b10ded659358ddc29bcb9b0c9afdb69e3e14a060afaec852c0b44d58c1e", Pod:"whisker-74c798dd46-h9tcx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.14.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliefd94b02907", MAC:"a6:86:d0:6f:18:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:47:28.425285 containerd[1984]: 2026-01-13 23:47:28.414 [INFO][4628] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a5523b10ded659358ddc29bcb9b0c9afdb69e3e14a060afaec852c0b44d58c1e" Namespace="calico-system" Pod="whisker-74c798dd46-h9tcx" WorkloadEndpoint="ip--172--31--28--147-k8s-whisker--74c798dd46--h9tcx-eth0" Jan 13 23:47:28.487742 containerd[1984]: time="2026-01-13T23:47:28.487125119Z" level=info msg="connecting to shim a5523b10ded659358ddc29bcb9b0c9afdb69e3e14a060afaec852c0b44d58c1e" address="unix:///run/containerd/s/681f58d314bac3a21329897ea5931eb1e610081feff28fb0238d7df6e33b3528" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:47:28.586646 systemd[1]: Started cri-containerd-a5523b10ded659358ddc29bcb9b0c9afdb69e3e14a060afaec852c0b44d58c1e.scope - libcontainer container a5523b10ded659358ddc29bcb9b0c9afdb69e3e14a060afaec852c0b44d58c1e. 
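The IPAM entries above claim 192.168.14.193 out of the block 192.168.14.192/26 that is affine to this host. A small standalone check of that containment (not Calico code), with both values copied from the log:

```go
// Minimal sketch: confirm that the address IPAM claimed above
// (192.168.14.193) falls inside the block it loaded for this host
// (192.168.14.192/26). Values are copied from the log entries.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.14.192/26") // block loaded for host ip-172-31-28-147
	addr := netip.MustParseAddr("192.168.14.193")       // address assigned to whisker-74c798dd46-h9tcx

	fmt.Printf("block %s holds %d addresses; contains %s: %v\n",
		block, 1<<(32-block.Bits()), addr, block.Contains(addr))
}
```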
Jan 13 23:47:28.612000 audit: BPF prog-id=182 op=LOAD Jan 13 23:47:28.612000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffec950d78 a2=98 a3=ffffec950d68 items=0 ppid=4654 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:28.612000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 13 23:47:28.615000 audit: BPF prog-id=182 op=UNLOAD Jan 13 23:47:28.615000 audit[4790]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffec950d48 a3=0 items=0 ppid=4654 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:28.615000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 13 23:47:28.616000 audit: BPF prog-id=183 op=LOAD Jan 13 23:47:28.616000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffec950c28 a2=74 a3=95 items=0 ppid=4654 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:28.616000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 13 23:47:28.616000 audit: BPF prog-id=183 op=UNLOAD Jan 13 23:47:28.616000 audit[4790]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4654 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:28.616000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 13 23:47:28.616000 audit: BPF prog-id=184 op=LOAD Jan 13 23:47:28.616000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffec950c58 a2=40 a3=ffffec950c88 items=0 ppid=4654 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:28.616000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 13 23:47:28.617000 audit: BPF prog-id=184 op=UNLOAD Jan 13 23:47:28.617000 audit[4790]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffec950c88 items=0 ppid=4654 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:28.617000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 13 23:47:28.639000 audit: BPF prog-id=185 op=LOAD Jan 13 23:47:28.642000 audit: BPF prog-id=186 op=LOAD Jan 13 23:47:28.642000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4754 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:28.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135353233623130646564363539333538646463323962636239623063 Jan 13 23:47:28.642000 audit: BPF prog-id=186 op=UNLOAD Jan 13 23:47:28.642000 audit[4765]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4754 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:28.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135353233623130646564363539333538646463323962636239623063 Jan 13 23:47:28.643000 audit: BPF prog-id=187 op=LOAD Jan 13 23:47:28.643000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4754 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:28.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135353233623130646564363539333538646463323962636239623063 Jan 13 23:47:28.644000 audit: BPF prog-id=188 op=LOAD Jan 13 23:47:28.644000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4754 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:28.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135353233623130646564363539333538646463323962636239623063 Jan 13 23:47:28.644000 audit: BPF prog-id=188 op=UNLOAD Jan 13 23:47:28.644000 audit[4765]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 
ppid=4754 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:28.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135353233623130646564363539333538646463323962636239623063 Jan 13 23:47:28.644000 audit: BPF prog-id=187 op=UNLOAD Jan 13 23:47:28.644000 audit[4765]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4754 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:28.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135353233623130646564363539333538646463323962636239623063 Jan 13 23:47:28.644000 audit: BPF prog-id=189 op=LOAD Jan 13 23:47:28.644000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4754 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:28.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135353233623130646564363539333538646463323962636239623063 Jan 13 23:47:28.665000 audit: BPF prog-id=190 op=LOAD Jan 13 23:47:28.665000 audit[4794]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe25d3ae8 a2=98 a3=ffffe25d3ad8 items=0 ppid=4654 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:28.665000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:47:28.667000 audit: BPF prog-id=190 op=UNLOAD Jan 13 23:47:28.667000 audit[4794]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe25d3ab8 a3=0 items=0 ppid=4654 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:28.667000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:47:28.668000 audit: BPF prog-id=191 op=LOAD Jan 13 23:47:28.668000 audit[4794]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe25d3778 a2=74 a3=95 items=0 ppid=4654 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:28.668000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:47:28.671000 audit: BPF prog-id=191 op=UNLOAD Jan 13 23:47:28.671000 audit[4794]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4654 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:28.671000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:47:28.672000 audit: BPF prog-id=192 op=LOAD Jan 13 23:47:28.672000 audit[4794]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe25d37d8 a2=94 a3=2 items=0 ppid=4654 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:28.672000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:47:28.672000 audit: BPF prog-id=192 op=UNLOAD Jan 13 23:47:28.672000 audit[4794]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4654 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:28.672000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:47:28.753235 containerd[1984]: time="2026-01-13T23:47:28.752993737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74c798dd46-h9tcx,Uid:d4afc04a-ba35-4c2d-9726-5e2240fe2e11,Namespace:calico-system,Attempt:0,} returns sandbox id \"a5523b10ded659358ddc29bcb9b0c9afdb69e3e14a060afaec852c0b44d58c1e\"" Jan 13 23:47:28.758921 containerd[1984]: time="2026-01-13T23:47:28.758790157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 13 23:47:28.898443 containerd[1984]: time="2026-01-13T23:47:28.897803305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t67lh,Uid:62188975-46ce-424e-8434-9b05ea3b2915,Namespace:calico-system,Attempt:0,}" Jan 13 23:47:28.899890 containerd[1984]: time="2026-01-13T23:47:28.899844325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-6fffm,Uid:f27ddc53-e7e4-41b7-97d9-616c5339cc85,Namespace:calico-system,Attempt:0,}" Jan 13 23:47:28.901958 containerd[1984]: time="2026-01-13T23:47:28.899956129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d495df4b7-2bldm,Uid:db4e6234-4e3a-4385-ad17-4564cf1a27b8,Namespace:calico-apiserver,Attempt:0,}" Jan 13 23:47:29.043805 containerd[1984]: time="2026-01-13T23:47:29.043716802Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:47:29.047225 containerd[1984]: time="2026-01-13T23:47:29.046590346Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 13 23:47:29.047225 containerd[1984]: time="2026-01-13T23:47:29.046785106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 13 23:47:29.047445 kubelet[3522]: E0113 23:47:29.047039 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:47:29.047445 kubelet[3522]: E0113 23:47:29.047233 3522 kuberuntime_image.go:42] "Failed to 
pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:47:29.050781 kubelet[3522]: E0113 23:47:29.050616 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d29b182626494232aa116abb84d9adc5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-62ft2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74c798dd46-h9tcx_calico-system(d4afc04a-ba35-4c2d-9726-5e2240fe2e11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 13 23:47:29.056212 containerd[1984]: time="2026-01-13T23:47:29.055925266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 13 23:47:29.345000 audit: BPF prog-id=193 op=LOAD Jan 13 23:47:29.345000 audit[4794]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe25d3798 a2=40 a3=ffffe25d37c8 items=0 ppid=4654 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:29.345000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:47:29.346000 audit: BPF prog-id=193 op=UNLOAD Jan 13 23:47:29.346000 audit[4794]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffe25d37c8 items=0 ppid=4654 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:29.346000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:47:29.354634 containerd[1984]: time="2026-01-13T23:47:29.354561564Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:47:29.357366 containerd[1984]: time="2026-01-13T23:47:29.356948064Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 13 23:47:29.357990 containerd[1984]: time="2026-01-13T23:47:29.357893988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 13 23:47:29.358386 kubelet[3522]: E0113 23:47:29.358316 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:47:29.358954 kubelet[3522]: E0113 23:47:29.358391 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:47:29.359919 kubelet[3522]: E0113 23:47:29.359741 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62ft2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74c798dd46-h9tcx_calico-system(d4afc04a-ba35-4c2d-9726-5e2240fe2e11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 13 23:47:29.361310 kubelet[3522]: E0113 23:47:29.361062 3522 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74c798dd46-h9tcx" podUID="d4afc04a-ba35-4c2d-9726-5e2240fe2e11" Jan 13 23:47:29.369000 audit: BPF prog-id=194 op=LOAD Jan 13 23:47:29.369000 audit[4794]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe25d37a8 a2=94 a3=4 items=0 ppid=4654 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:29.369000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:47:29.369000 audit: BPF prog-id=194 op=UNLOAD Jan 13 23:47:29.369000 audit[4794]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4654 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:29.369000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:47:29.370000 audit: BPF prog-id=195 op=LOAD Jan 13 23:47:29.370000 audit[4794]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe25d35e8 a2=94 a3=5 items=0 ppid=4654 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:29.370000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:47:29.371000 audit: BPF prog-id=195 op=UNLOAD Jan 13 23:47:29.371000 audit[4794]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4654 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:29.371000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:47:29.371000 audit: BPF prog-id=196 op=LOAD Jan 13 23:47:29.371000 audit[4794]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe25d3818 a2=94 a3=6 items=0 ppid=4654 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:29.371000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:47:29.372000 audit: BPF prog-id=196 op=UNLOAD Jan 13 23:47:29.372000 audit[4794]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4654 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:29.372000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:47:29.372000 audit: BPF prog-id=197 op=LOAD Jan 13 23:47:29.372000 audit[4794]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe25d2fe8 a2=94 a3=83 items=0 ppid=4654 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:29.372000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:47:29.373000 audit: BPF prog-id=198 op=LOAD Jan 13 23:47:29.373000 audit[4794]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffe25d2da8 a2=94 a3=2 items=0 ppid=4654 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:29.373000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:47:29.373000 audit: BPF prog-id=198 op=UNLOAD Jan 13 23:47:29.373000 audit[4794]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4654 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:29.373000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:47:29.373000 audit: BPF prog-id=197 op=UNLOAD Jan 13 23:47:29.373000 audit[4794]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=fc3d620 a3=fc30b00 items=0 ppid=4654 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:29.373000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:47:29.442000 audit: BPF prog-id=199 op=LOAD Jan 13 23:47:29.442000 audit[4883]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff4855978 a2=98 a3=fffff4855968 items=0 ppid=4654 pid=4883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:29.442000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 13 23:47:29.442000 audit: BPF prog-id=199 op=UNLOAD Jan 13 23:47:29.442000 audit[4883]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff4855948 a3=0 items=0 ppid=4654 pid=4883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:29.442000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 13 23:47:29.442000 audit: BPF prog-id=200 op=LOAD Jan 13 23:47:29.442000 audit[4883]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff4855828 a2=74 
a3=95 items=0 ppid=4654 pid=4883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:29.442000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 13 23:47:29.442000 audit: BPF prog-id=200 op=UNLOAD Jan 13 23:47:29.442000 audit[4883]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4654 pid=4883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:29.442000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 13 23:47:29.442000 audit: BPF prog-id=201 op=LOAD Jan 13 23:47:29.442000 audit[4883]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff4855858 a2=40 a3=fffff4855888 items=0 ppid=4654 pid=4883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:29.442000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 13 23:47:29.442000 audit: BPF prog-id=201 op=UNLOAD Jan 13 23:47:29.442000 audit[4883]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff4855888 items=0 ppid=4654 pid=4883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:29.442000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 13 23:47:29.458112 systemd-networkd[1574]: cali22dd70a2d22: Link UP Jan 13 23:47:29.460718 systemd-networkd[1574]: cali22dd70a2d22: Gained carrier Jan 13 23:47:29.503544 containerd[1984]: 2026-01-13 23:47:29.145 [INFO][4823] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--147-k8s-csi--node--driver--t67lh-eth0 csi-node-driver- calico-system 62188975-46ce-424e-8434-9b05ea3b2915 773 0 2026-01-13 23:47:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-28-147 csi-node-driver-t67lh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali22dd70a2d22 [] [] }} 
ContainerID="e244331772f414d2fde2fee8e065d2cde8f175299658e52eb284d6c245a49c25" Namespace="calico-system" Pod="csi-node-driver-t67lh" WorkloadEndpoint="ip--172--31--28--147-k8s-csi--node--driver--t67lh-" Jan 13 23:47:29.503544 containerd[1984]: 2026-01-13 23:47:29.145 [INFO][4823] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e244331772f414d2fde2fee8e065d2cde8f175299658e52eb284d6c245a49c25" Namespace="calico-system" Pod="csi-node-driver-t67lh" WorkloadEndpoint="ip--172--31--28--147-k8s-csi--node--driver--t67lh-eth0" Jan 13 23:47:29.503544 containerd[1984]: 2026-01-13 23:47:29.284 [INFO][4860] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e244331772f414d2fde2fee8e065d2cde8f175299658e52eb284d6c245a49c25" HandleID="k8s-pod-network.e244331772f414d2fde2fee8e065d2cde8f175299658e52eb284d6c245a49c25" Workload="ip--172--31--28--147-k8s-csi--node--driver--t67lh-eth0" Jan 13 23:47:29.504252 containerd[1984]: 2026-01-13 23:47:29.285 [INFO][4860] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e244331772f414d2fde2fee8e065d2cde8f175299658e52eb284d6c245a49c25" HandleID="k8s-pod-network.e244331772f414d2fde2fee8e065d2cde8f175299658e52eb284d6c245a49c25" Workload="ip--172--31--28--147-k8s-csi--node--driver--t67lh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3020), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-147", "pod":"csi-node-driver-t67lh", "timestamp":"2026-01-13 23:47:29.284429471 +0000 UTC"}, Hostname:"ip-172-31-28-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:47:29.504252 containerd[1984]: 2026-01-13 23:47:29.286 [INFO][4860] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:47:29.504252 containerd[1984]: 2026-01-13 23:47:29.286 [INFO][4860] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 13 23:47:29.504252 containerd[1984]: 2026-01-13 23:47:29.286 [INFO][4860] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-147' Jan 13 23:47:29.504252 containerd[1984]: 2026-01-13 23:47:29.320 [INFO][4860] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e244331772f414d2fde2fee8e065d2cde8f175299658e52eb284d6c245a49c25" host="ip-172-31-28-147" Jan 13 23:47:29.504252 containerd[1984]: 2026-01-13 23:47:29.332 [INFO][4860] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-147" Jan 13 23:47:29.504252 containerd[1984]: 2026-01-13 23:47:29.360 [INFO][4860] ipam/ipam.go 511: Trying affinity for 192.168.14.192/26 host="ip-172-31-28-147" Jan 13 23:47:29.504252 containerd[1984]: 2026-01-13 23:47:29.378 [INFO][4860] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.192/26 host="ip-172-31-28-147" Jan 13 23:47:29.504252 containerd[1984]: 2026-01-13 23:47:29.386 [INFO][4860] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ip-172-31-28-147" Jan 13 23:47:29.509041 containerd[1984]: 2026-01-13 23:47:29.386 [INFO][4860] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.e244331772f414d2fde2fee8e065d2cde8f175299658e52eb284d6c245a49c25" host="ip-172-31-28-147" Jan 13 23:47:29.509041 containerd[1984]: 2026-01-13 23:47:29.392 [INFO][4860] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e244331772f414d2fde2fee8e065d2cde8f175299658e52eb284d6c245a49c25 Jan 13 23:47:29.509041 containerd[1984]: 2026-01-13 23:47:29.404 [INFO][4860] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.e244331772f414d2fde2fee8e065d2cde8f175299658e52eb284d6c245a49c25" host="ip-172-31-28-147" Jan 13 23:47:29.509041 containerd[1984]: 2026-01-13 23:47:29.426 [INFO][4860] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.14.194/26] block=192.168.14.192/26 handle="k8s-pod-network.e244331772f414d2fde2fee8e065d2cde8f175299658e52eb284d6c245a49c25" host="ip-172-31-28-147" Jan 13 23:47:29.509041 containerd[1984]: 2026-01-13 23:47:29.427 [INFO][4860] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.194/26] handle="k8s-pod-network.e244331772f414d2fde2fee8e065d2cde8f175299658e52eb284d6c245a49c25" host="ip-172-31-28-147" Jan 13 23:47:29.509041 containerd[1984]: 2026-01-13 23:47:29.427 [INFO][4860] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 13 23:47:29.509041 containerd[1984]: 2026-01-13 23:47:29.427 [INFO][4860] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.14.194/26] IPv6=[] ContainerID="e244331772f414d2fde2fee8e065d2cde8f175299658e52eb284d6c245a49c25" HandleID="k8s-pod-network.e244331772f414d2fde2fee8e065d2cde8f175299658e52eb284d6c245a49c25" Workload="ip--172--31--28--147-k8s-csi--node--driver--t67lh-eth0" Jan 13 23:47:29.509362 containerd[1984]: 2026-01-13 23:47:29.433 [INFO][4823] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e244331772f414d2fde2fee8e065d2cde8f175299658e52eb284d6c245a49c25" Namespace="calico-system" Pod="csi-node-driver-t67lh" WorkloadEndpoint="ip--172--31--28--147-k8s-csi--node--driver--t67lh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--147-k8s-csi--node--driver--t67lh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"62188975-46ce-424e-8434-9b05ea3b2915", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 47, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-147", ContainerID:"", Pod:"csi-node-driver-t67lh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.14.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali22dd70a2d22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:47:29.511831 containerd[1984]: 2026-01-13 23:47:29.433 [INFO][4823] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.194/32] ContainerID="e244331772f414d2fde2fee8e065d2cde8f175299658e52eb284d6c245a49c25" Namespace="calico-system" Pod="csi-node-driver-t67lh" WorkloadEndpoint="ip--172--31--28--147-k8s-csi--node--driver--t67lh-eth0" Jan 13 23:47:29.511831 containerd[1984]: 2026-01-13 23:47:29.433 [INFO][4823] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22dd70a2d22 ContainerID="e244331772f414d2fde2fee8e065d2cde8f175299658e52eb284d6c245a49c25" Namespace="calico-system" Pod="csi-node-driver-t67lh" WorkloadEndpoint="ip--172--31--28--147-k8s-csi--node--driver--t67lh-eth0" Jan 13 23:47:29.511831 containerd[1984]: 2026-01-13 23:47:29.465 [INFO][4823] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e244331772f414d2fde2fee8e065d2cde8f175299658e52eb284d6c245a49c25" Namespace="calico-system" Pod="csi-node-driver-t67lh" WorkloadEndpoint="ip--172--31--28--147-k8s-csi--node--driver--t67lh-eth0" Jan 13 23:47:29.512009 containerd[1984]: 2026-01-13 23:47:29.468 [INFO][4823] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e244331772f414d2fde2fee8e065d2cde8f175299658e52eb284d6c245a49c25" 
Namespace="calico-system" Pod="csi-node-driver-t67lh" WorkloadEndpoint="ip--172--31--28--147-k8s-csi--node--driver--t67lh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--147-k8s-csi--node--driver--t67lh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"62188975-46ce-424e-8434-9b05ea3b2915", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 47, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-147", ContainerID:"e244331772f414d2fde2fee8e065d2cde8f175299658e52eb284d6c245a49c25", Pod:"csi-node-driver-t67lh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.14.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali22dd70a2d22", MAC:"22:f0:9a:d8:3f:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:47:29.512138 containerd[1984]: 2026-01-13 23:47:29.496 [INFO][4823] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e244331772f414d2fde2fee8e065d2cde8f175299658e52eb284d6c245a49c25" Namespace="calico-system" Pod="csi-node-driver-t67lh" WorkloadEndpoint="ip--172--31--28--147-k8s-csi--node--driver--t67lh-eth0" Jan 13 23:47:29.597667 containerd[1984]: time="2026-01-13T23:47:29.597596113Z" level=info msg="connecting to shim e244331772f414d2fde2fee8e065d2cde8f175299658e52eb284d6c245a49c25" address="unix:///run/containerd/s/1f8d4d7336c385746680b0038a9ac72f069c7ddd7da605455c02d7f168094056" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:47:29.607329 systemd-networkd[1574]: cali59e8bd6c8e1: Link UP Jan 13 23:47:29.618361 systemd-networkd[1574]: cali59e8bd6c8e1: Gained carrier Jan 13 23:47:29.679364 containerd[1984]: 2026-01-13 23:47:29.186 [INFO][4837] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--147-k8s-goldmane--666569f655--6fffm-eth0 goldmane-666569f655- calico-system f27ddc53-e7e4-41b7-97d9-616c5339cc85 886 0 2026-01-13 23:47:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-28-147 goldmane-666569f655-6fffm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali59e8bd6c8e1 [] [] }} ContainerID="fdff5e29dbc4f30680df0c505013f1cb53e0daeafae790f1706e8f4e284e247a" Namespace="calico-system" Pod="goldmane-666569f655-6fffm" WorkloadEndpoint="ip--172--31--28--147-k8s-goldmane--666569f655--6fffm-" Jan 13 23:47:29.679364 containerd[1984]: 2026-01-13 23:47:29.187 [INFO][4837] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="fdff5e29dbc4f30680df0c505013f1cb53e0daeafae790f1706e8f4e284e247a" Namespace="calico-system" Pod="goldmane-666569f655-6fffm" WorkloadEndpoint="ip--172--31--28--147-k8s-goldmane--666569f655--6fffm-eth0" Jan 13 23:47:29.679364 containerd[1984]: 2026-01-13 23:47:29.387 [INFO][4866] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fdff5e29dbc4f30680df0c505013f1cb53e0daeafae790f1706e8f4e284e247a" HandleID="k8s-pod-network.fdff5e29dbc4f30680df0c505013f1cb53e0daeafae790f1706e8f4e284e247a" Workload="ip--172--31--28--147-k8s-goldmane--666569f655--6fffm-eth0" Jan 13 23:47:29.680570 containerd[1984]: 2026-01-13 23:47:29.388 [INFO][4866] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fdff5e29dbc4f30680df0c505013f1cb53e0daeafae790f1706e8f4e284e247a" HandleID="k8s-pod-network.fdff5e29dbc4f30680df0c505013f1cb53e0daeafae790f1706e8f4e284e247a" Workload="ip--172--31--28--147-k8s-goldmane--666569f655--6fffm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000121890), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-147", "pod":"goldmane-666569f655-6fffm", "timestamp":"2026-01-13 23:47:29.387889716 +0000 UTC"}, Hostname:"ip-172-31-28-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:47:29.680570 containerd[1984]: 2026-01-13 23:47:29.388 [INFO][4866] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:47:29.680570 containerd[1984]: 2026-01-13 23:47:29.427 [INFO][4866] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 13 23:47:29.680570 containerd[1984]: 2026-01-13 23:47:29.427 [INFO][4866] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-147' Jan 13 23:47:29.680570 containerd[1984]: 2026-01-13 23:47:29.478 [INFO][4866] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fdff5e29dbc4f30680df0c505013f1cb53e0daeafae790f1706e8f4e284e247a" host="ip-172-31-28-147" Jan 13 23:47:29.680570 containerd[1984]: 2026-01-13 23:47:29.493 [INFO][4866] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-147" Jan 13 23:47:29.680570 containerd[1984]: 2026-01-13 23:47:29.515 [INFO][4866] ipam/ipam.go 511: Trying affinity for 192.168.14.192/26 host="ip-172-31-28-147" Jan 13 23:47:29.680570 containerd[1984]: 2026-01-13 23:47:29.522 [INFO][4866] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.192/26 host="ip-172-31-28-147" Jan 13 23:47:29.680570 containerd[1984]: 2026-01-13 23:47:29.527 [INFO][4866] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ip-172-31-28-147" Jan 13 23:47:29.681102 containerd[1984]: 2026-01-13 23:47:29.527 [INFO][4866] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.fdff5e29dbc4f30680df0c505013f1cb53e0daeafae790f1706e8f4e284e247a" host="ip-172-31-28-147" Jan 13 23:47:29.681102 containerd[1984]: 2026-01-13 23:47:29.531 [INFO][4866] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fdff5e29dbc4f30680df0c505013f1cb53e0daeafae790f1706e8f4e284e247a Jan 13 23:47:29.681102 containerd[1984]: 2026-01-13 23:47:29.538 [INFO][4866] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.fdff5e29dbc4f30680df0c505013f1cb53e0daeafae790f1706e8f4e284e247a" 
host="ip-172-31-28-147" Jan 13 23:47:29.681102 containerd[1984]: 2026-01-13 23:47:29.566 [INFO][4866] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.14.195/26] block=192.168.14.192/26 handle="k8s-pod-network.fdff5e29dbc4f30680df0c505013f1cb53e0daeafae790f1706e8f4e284e247a" host="ip-172-31-28-147" Jan 13 23:47:29.681102 containerd[1984]: 2026-01-13 23:47:29.567 [INFO][4866] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.195/26] handle="k8s-pod-network.fdff5e29dbc4f30680df0c505013f1cb53e0daeafae790f1706e8f4e284e247a" host="ip-172-31-28-147" Jan 13 23:47:29.681102 containerd[1984]: 2026-01-13 23:47:29.567 [INFO][4866] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 13 23:47:29.681102 containerd[1984]: 2026-01-13 23:47:29.569 [INFO][4866] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.14.195/26] IPv6=[] ContainerID="fdff5e29dbc4f30680df0c505013f1cb53e0daeafae790f1706e8f4e284e247a" HandleID="k8s-pod-network.fdff5e29dbc4f30680df0c505013f1cb53e0daeafae790f1706e8f4e284e247a" Workload="ip--172--31--28--147-k8s-goldmane--666569f655--6fffm-eth0" Jan 13 23:47:29.683182 containerd[1984]: 2026-01-13 23:47:29.586 [INFO][4837] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fdff5e29dbc4f30680df0c505013f1cb53e0daeafae790f1706e8f4e284e247a" Namespace="calico-system" Pod="goldmane-666569f655-6fffm" WorkloadEndpoint="ip--172--31--28--147-k8s-goldmane--666569f655--6fffm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--147-k8s-goldmane--666569f655--6fffm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f27ddc53-e7e4-41b7-97d9-616c5339cc85", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 47, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-147", ContainerID:"", Pod:"goldmane-666569f655-6fffm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.14.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali59e8bd6c8e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:47:29.683182 containerd[1984]: 2026-01-13 23:47:29.587 [INFO][4837] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.195/32] ContainerID="fdff5e29dbc4f30680df0c505013f1cb53e0daeafae790f1706e8f4e284e247a" Namespace="calico-system" Pod="goldmane-666569f655-6fffm" WorkloadEndpoint="ip--172--31--28--147-k8s-goldmane--666569f655--6fffm-eth0" Jan 13 23:47:29.683406 containerd[1984]: 2026-01-13 23:47:29.587 [INFO][4837] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59e8bd6c8e1 ContainerID="fdff5e29dbc4f30680df0c505013f1cb53e0daeafae790f1706e8f4e284e247a" Namespace="calico-system" Pod="goldmane-666569f655-6fffm" 
WorkloadEndpoint="ip--172--31--28--147-k8s-goldmane--666569f655--6fffm-eth0" Jan 13 23:47:29.683406 containerd[1984]: 2026-01-13 23:47:29.622 [INFO][4837] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fdff5e29dbc4f30680df0c505013f1cb53e0daeafae790f1706e8f4e284e247a" Namespace="calico-system" Pod="goldmane-666569f655-6fffm" WorkloadEndpoint="ip--172--31--28--147-k8s-goldmane--666569f655--6fffm-eth0" Jan 13 23:47:29.683721 containerd[1984]: 2026-01-13 23:47:29.623 [INFO][4837] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fdff5e29dbc4f30680df0c505013f1cb53e0daeafae790f1706e8f4e284e247a" Namespace="calico-system" Pod="goldmane-666569f655-6fffm" WorkloadEndpoint="ip--172--31--28--147-k8s-goldmane--666569f655--6fffm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--147-k8s-goldmane--666569f655--6fffm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f27ddc53-e7e4-41b7-97d9-616c5339cc85", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 47, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-147", ContainerID:"fdff5e29dbc4f30680df0c505013f1cb53e0daeafae790f1706e8f4e284e247a", Pod:"goldmane-666569f655-6fffm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.14.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali59e8bd6c8e1", MAC:"3e:4b:09:08:23:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:47:29.683877 containerd[1984]: 2026-01-13 23:47:29.666 [INFO][4837] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fdff5e29dbc4f30680df0c505013f1cb53e0daeafae790f1706e8f4e284e247a" Namespace="calico-system" Pod="goldmane-666569f655-6fffm" WorkloadEndpoint="ip--172--31--28--147-k8s-goldmane--666569f655--6fffm-eth0" Jan 13 23:47:29.756883 systemd[1]: Started cri-containerd-e244331772f414d2fde2fee8e065d2cde8f175299658e52eb284d6c245a49c25.scope - libcontainer container e244331772f414d2fde2fee8e065d2cde8f175299658e52eb284d6c245a49c25. 
Jan 13 23:47:29.787918 systemd-networkd[1574]: cali2de7a7e270c: Link UP Jan 13 23:47:29.792981 systemd-networkd[1574]: cali2de7a7e270c: Gained carrier Jan 13 23:47:29.818976 containerd[1984]: time="2026-01-13T23:47:29.818919674Z" level=info msg="connecting to shim fdff5e29dbc4f30680df0c505013f1cb53e0daeafae790f1706e8f4e284e247a" address="unix:///run/containerd/s/f4791ea7791af9fbbd1936f783a46efe3cc16d43369ec19df3c6dc9ea3dc662a" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:47:29.878811 containerd[1984]: 2026-01-13 23:47:29.218 [INFO][4835] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--2bldm-eth0 calico-apiserver-5d495df4b7- calico-apiserver db4e6234-4e3a-4385-ad17-4564cf1a27b8 884 0 2026-01-13 23:46:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d495df4b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-147 calico-apiserver-5d495df4b7-2bldm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2de7a7e270c [] [] }} ContainerID="536382e21a47d3bf73f0782b1ebe6a5360a50f73b394611e7cb2fd5e15598715" Namespace="calico-apiserver" Pod="calico-apiserver-5d495df4b7-2bldm" WorkloadEndpoint="ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--2bldm-" Jan 13 23:47:29.878811 containerd[1984]: 2026-01-13 23:47:29.219 [INFO][4835] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="536382e21a47d3bf73f0782b1ebe6a5360a50f73b394611e7cb2fd5e15598715" Namespace="calico-apiserver" Pod="calico-apiserver-5d495df4b7-2bldm" WorkloadEndpoint="ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--2bldm-eth0" Jan 13 23:47:29.878811 containerd[1984]: 2026-01-13 23:47:29.411 [INFO][4871] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="536382e21a47d3bf73f0782b1ebe6a5360a50f73b394611e7cb2fd5e15598715" HandleID="k8s-pod-network.536382e21a47d3bf73f0782b1ebe6a5360a50f73b394611e7cb2fd5e15598715" Workload="ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--2bldm-eth0" Jan 13 23:47:29.879122 containerd[1984]: 2026-01-13 23:47:29.416 [INFO][4871] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="536382e21a47d3bf73f0782b1ebe6a5360a50f73b394611e7cb2fd5e15598715" HandleID="k8s-pod-network.536382e21a47d3bf73f0782b1ebe6a5360a50f73b394611e7cb2fd5e15598715" Workload="ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--2bldm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002af700), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-147", "pod":"calico-apiserver-5d495df4b7-2bldm", "timestamp":"2026-01-13 23:47:29.411381084 +0000 UTC"}, Hostname:"ip-172-31-28-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:47:29.879122 containerd[1984]: 2026-01-13 23:47:29.416 [INFO][4871] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:47:29.879122 containerd[1984]: 2026-01-13 23:47:29.567 [INFO][4871] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 13 23:47:29.879122 containerd[1984]: 2026-01-13 23:47:29.568 [INFO][4871] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-147' Jan 13 23:47:29.879122 containerd[1984]: 2026-01-13 23:47:29.617 [INFO][4871] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.536382e21a47d3bf73f0782b1ebe6a5360a50f73b394611e7cb2fd5e15598715" host="ip-172-31-28-147" Jan 13 23:47:29.879122 containerd[1984]: 2026-01-13 23:47:29.658 [INFO][4871] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-147" Jan 13 23:47:29.879122 containerd[1984]: 2026-01-13 23:47:29.673 [INFO][4871] ipam/ipam.go 511: Trying affinity for 192.168.14.192/26 host="ip-172-31-28-147" Jan 13 23:47:29.879122 containerd[1984]: 2026-01-13 23:47:29.683 [INFO][4871] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.192/26 host="ip-172-31-28-147" Jan 13 23:47:29.879122 containerd[1984]: 2026-01-13 23:47:29.694 [INFO][4871] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ip-172-31-28-147" Jan 13 23:47:29.882711 containerd[1984]: 2026-01-13 23:47:29.695 [INFO][4871] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.536382e21a47d3bf73f0782b1ebe6a5360a50f73b394611e7cb2fd5e15598715" host="ip-172-31-28-147" Jan 13 23:47:29.882711 containerd[1984]: 2026-01-13 23:47:29.700 [INFO][4871] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.536382e21a47d3bf73f0782b1ebe6a5360a50f73b394611e7cb2fd5e15598715 Jan 13 23:47:29.882711 containerd[1984]: 2026-01-13 23:47:29.718 [INFO][4871] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.536382e21a47d3bf73f0782b1ebe6a5360a50f73b394611e7cb2fd5e15598715" host="ip-172-31-28-147" Jan 13 23:47:29.882711 containerd[1984]: 2026-01-13 23:47:29.764 [INFO][4871] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.14.196/26] block=192.168.14.192/26 handle="k8s-pod-network.536382e21a47d3bf73f0782b1ebe6a5360a50f73b394611e7cb2fd5e15598715" host="ip-172-31-28-147" Jan 13 23:47:29.882711 containerd[1984]: 2026-01-13 23:47:29.765 [INFO][4871] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.196/26] handle="k8s-pod-network.536382e21a47d3bf73f0782b1ebe6a5360a50f73b394611e7cb2fd5e15598715" host="ip-172-31-28-147" Jan 13 23:47:29.882711 containerd[1984]: 2026-01-13 23:47:29.765 [INFO][4871] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 13 23:47:29.882711 containerd[1984]: 2026-01-13 23:47:29.765 [INFO][4871] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.14.196/26] IPv6=[] ContainerID="536382e21a47d3bf73f0782b1ebe6a5360a50f73b394611e7cb2fd5e15598715" HandleID="k8s-pod-network.536382e21a47d3bf73f0782b1ebe6a5360a50f73b394611e7cb2fd5e15598715" Workload="ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--2bldm-eth0" Jan 13 23:47:29.883067 containerd[1984]: 2026-01-13 23:47:29.773 [INFO][4835] cni-plugin/k8s.go 418: Populated endpoint ContainerID="536382e21a47d3bf73f0782b1ebe6a5360a50f73b394611e7cb2fd5e15598715" Namespace="calico-apiserver" Pod="calico-apiserver-5d495df4b7-2bldm" WorkloadEndpoint="ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--2bldm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--2bldm-eth0", GenerateName:"calico-apiserver-5d495df4b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"db4e6234-4e3a-4385-ad17-4564cf1a27b8", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d495df4b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-147", ContainerID:"", Pod:"calico-apiserver-5d495df4b7-2bldm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2de7a7e270c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:47:29.883235 containerd[1984]: 2026-01-13 23:47:29.774 [INFO][4835] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.196/32] ContainerID="536382e21a47d3bf73f0782b1ebe6a5360a50f73b394611e7cb2fd5e15598715" Namespace="calico-apiserver" Pod="calico-apiserver-5d495df4b7-2bldm" WorkloadEndpoint="ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--2bldm-eth0" Jan 13 23:47:29.883235 containerd[1984]: 2026-01-13 23:47:29.774 [INFO][4835] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2de7a7e270c ContainerID="536382e21a47d3bf73f0782b1ebe6a5360a50f73b394611e7cb2fd5e15598715" Namespace="calico-apiserver" Pod="calico-apiserver-5d495df4b7-2bldm" WorkloadEndpoint="ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--2bldm-eth0" Jan 13 23:47:29.883235 containerd[1984]: 2026-01-13 23:47:29.806 [INFO][4835] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="536382e21a47d3bf73f0782b1ebe6a5360a50f73b394611e7cb2fd5e15598715" Namespace="calico-apiserver" Pod="calico-apiserver-5d495df4b7-2bldm" WorkloadEndpoint="ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--2bldm-eth0" Jan 13 23:47:29.883390 containerd[1984]: 2026-01-13 23:47:29.814 [INFO][4835] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="536382e21a47d3bf73f0782b1ebe6a5360a50f73b394611e7cb2fd5e15598715" Namespace="calico-apiserver" Pod="calico-apiserver-5d495df4b7-2bldm" WorkloadEndpoint="ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--2bldm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--2bldm-eth0", GenerateName:"calico-apiserver-5d495df4b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"db4e6234-4e3a-4385-ad17-4564cf1a27b8", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d495df4b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-147", ContainerID:"536382e21a47d3bf73f0782b1ebe6a5360a50f73b394611e7cb2fd5e15598715", Pod:"calico-apiserver-5d495df4b7-2bldm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2de7a7e270c", MAC:"aa:e8:63:8e:90:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:47:29.884625 containerd[1984]: 2026-01-13 23:47:29.863 [INFO][4835] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="536382e21a47d3bf73f0782b1ebe6a5360a50f73b394611e7cb2fd5e15598715" Namespace="calico-apiserver" Pod="calico-apiserver-5d495df4b7-2bldm" WorkloadEndpoint="ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--2bldm-eth0" Jan 13 23:47:29.900383 containerd[1984]: time="2026-01-13T23:47:29.900014870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tfxlt,Uid:cc252f99-0335-4f8d-b959-3bfc25386e2c,Namespace:kube-system,Attempt:0,}" Jan 13 23:47:29.923471 systemd-networkd[1574]: vxlan.calico: Link UP Jan 13 23:47:29.924343 systemd-networkd[1574]: vxlan.calico: Gained carrier Jan 13 23:47:29.929600 systemd-networkd[1574]: caliefd94b02907: Gained IPv6LL Jan 13 23:47:29.986000 audit: BPF prog-id=202 op=LOAD Jan 13 23:47:29.996000 audit: BPF prog-id=203 op=LOAD Jan 13 23:47:29.996000 audit[4925]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4911 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:29.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532343433333137373266343134643266646532666565386530363564 Jan 13 23:47:29.996000 audit: BPF prog-id=203 op=UNLOAD Jan 13 23:47:29.996000 audit[4925]: SYSCALL arch=c00000b7 
syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4911 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:29.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532343433333137373266343134643266646532666565386530363564 Jan 13 23:47:30.007000 audit: BPF prog-id=204 op=LOAD Jan 13 23:47:30.007000 audit[4925]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4911 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532343433333137373266343134643266646532666565386530363564 Jan 13 23:47:30.009000 audit: BPF prog-id=205 op=LOAD Jan 13 23:47:30.009000 audit[4925]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4911 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532343433333137373266343134643266646532666565386530363564 Jan 13 23:47:30.009000 audit: BPF prog-id=205 op=UNLOAD Jan 13 23:47:30.009000 audit[4925]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4911 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532343433333137373266343134643266646532666565386530363564 Jan 13 23:47:30.009000 audit: BPF prog-id=204 op=UNLOAD Jan 13 23:47:30.009000 audit[4925]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4911 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532343433333137373266343134643266646532666565386530363564 Jan 13 23:47:30.009000 audit: BPF prog-id=206 op=LOAD Jan 13 23:47:30.009000 audit[4925]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4911 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532343433333137373266343134643266646532666565386530363564 Jan 13 23:47:30.025917 systemd[1]: Started cri-containerd-fdff5e29dbc4f30680df0c505013f1cb53e0daeafae790f1706e8f4e284e247a.scope - libcontainer container fdff5e29dbc4f30680df0c505013f1cb53e0daeafae790f1706e8f4e284e247a. Jan 13 23:47:30.110867 containerd[1984]: time="2026-01-13T23:47:30.110800331Z" level=info msg="connecting to shim 536382e21a47d3bf73f0782b1ebe6a5360a50f73b394611e7cb2fd5e15598715" address="unix:///run/containerd/s/e6e7ef6084e894bd410e17c5f231e8abbef759f51b11e44b2f9eb0d61266d0d0" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:47:30.117682 containerd[1984]: time="2026-01-13T23:47:30.117444780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t67lh,Uid:62188975-46ce-424e-8434-9b05ea3b2915,Namespace:calico-system,Attempt:0,} returns sandbox id \"e244331772f414d2fde2fee8e065d2cde8f175299658e52eb284d6c245a49c25\"" Jan 13 23:47:30.125304 containerd[1984]: time="2026-01-13T23:47:30.124648692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 13 23:47:30.176851 kernel: kauditd_printk_skb: 139 callbacks suppressed Jan 13 23:47:30.177860 kernel: audit: type=1334 audit(1768348050.172:649): prog-id=207 op=LOAD Jan 13 23:47:30.172000 audit: BPF prog-id=207 op=LOAD Jan 13 23:47:30.177000 audit: BPF prog-id=208 op=LOAD Jan 13 23:47:30.188471 kernel: audit: type=1334 audit(1768348050.177:650): prog-id=208 op=LOAD Jan 13 23:47:30.189118 kernel: audit: type=1300 audit(1768348050.177:650): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4950 pid=4974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.177000 audit[4974]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4950 pid=4974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.201818 kernel: audit: type=1327 audit(1768348050.177:650): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664666635653239646263346633303638306466306335303530313366 Jan 13 23:47:30.177000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664666635653239646263346633303638306466306335303530313366 Jan 13 23:47:30.209162 kernel: audit: type=1334 audit(1768348050.177:651): prog-id=208 op=UNLOAD Jan 13 23:47:30.177000 audit: BPF prog-id=208 op=UNLOAD Jan 13 23:47:30.217826 kernel: audit: type=1300 audit(1768348050.177:651): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4950 pid=4974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.177000 audit[4974]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4950 pid=4974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.228835 kernel: audit: type=1327 audit(1768348050.177:651): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664666635653239646263346633303638306466306335303530313366 Jan 13 23:47:30.177000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664666635653239646263346633303638306466306335303530313366 Jan 13 23:47:30.177000 audit: BPF prog-id=209 op=LOAD Jan 13 23:47:30.231062 kernel: audit: type=1334 audit(1768348050.177:652): prog-id=209 op=LOAD Jan 13 23:47:30.177000 audit[4974]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4950 pid=4974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.177000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664666635653239646263346633303638306466306335303530313366 Jan 13 23:47:30.244744 kernel: audit: type=1300 audit(1768348050.177:652): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4950 pid=4974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.244986 kernel: audit: type=1327 audit(1768348050.177:652): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664666635653239646263346633303638306466306335303530313366 Jan 13 23:47:30.179000 audit: BPF prog-id=210 op=LOAD Jan 13 23:47:30.179000 audit[4974]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4950 pid=4974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664666635653239646263346633303638306466306335303530313366 Jan 13 23:47:30.187000 audit: BPF prog-id=210 op=UNLOAD Jan 13 23:47:30.187000 audit[4974]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4950 pid=4974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 13 23:47:30.187000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664666635653239646263346633303638306466306335303530313366 Jan 13 23:47:30.187000 audit: BPF prog-id=209 op=UNLOAD Jan 13 23:47:30.187000 audit[4974]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4950 pid=4974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.187000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664666635653239646263346633303638306466306335303530313366 Jan 13 23:47:30.187000 audit: BPF prog-id=211 op=LOAD Jan 13 23:47:30.187000 audit[4974]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4950 pid=4974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.187000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664666635653239646263346633303638306466306335303530313366 Jan 13 23:47:30.281622 systemd[1]: Started cri-containerd-536382e21a47d3bf73f0782b1ebe6a5360a50f73b394611e7cb2fd5e15598715.scope - libcontainer container 536382e21a47d3bf73f0782b1ebe6a5360a50f73b394611e7cb2fd5e15598715. Jan 13 23:47:30.293656 kubelet[3522]: E0113 23:47:30.293578 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74c798dd46-h9tcx" podUID="d4afc04a-ba35-4c2d-9726-5e2240fe2e11" Jan 13 23:47:30.418658 (udev-worker)[4576]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 23:47:30.423072 containerd[1984]: time="2026-01-13T23:47:30.421738177Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:47:30.425657 containerd[1984]: time="2026-01-13T23:47:30.424692025Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 13 23:47:30.426611 containerd[1984]: time="2026-01-13T23:47:30.425951473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 13 23:47:30.431842 kubelet[3522]: E0113 23:47:30.430726 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:47:30.431842 kubelet[3522]: E0113 23:47:30.430805 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:47:30.431842 kubelet[3522]: E0113 23:47:30.430987 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbgsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t67lh_calico-system(62188975-46ce-424e-8434-9b05ea3b2915): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 13 23:47:30.437145 containerd[1984]: time="2026-01-13T23:47:30.436587805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 13 23:47:30.448000 audit[5076]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=5076 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:30.448000 audit[5076]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd2dfc2e0 a2=0 a3=1 items=0 ppid=3635 pid=5076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.448000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:30.451000 audit: BPF prog-id=212 op=LOAD Jan 13 23:47:30.451000 audit[5079]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff7582518 a2=98 a3=fffff7582508 items=0 ppid=4654 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.451000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:47:30.452000 audit: BPF prog-id=212 op=UNLOAD Jan 13 23:47:30.452000 audit[5079]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff75824e8 a3=0 items=0 ppid=4654 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.452000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:47:30.452000 audit: BPF prog-id=213 op=LOAD Jan 13 23:47:30.452000 audit[5079]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff75821f8 a2=74 a3=95 items=0 ppid=4654 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.452000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:47:30.452000 audit: BPF prog-id=213 op=UNLOAD Jan 13 23:47:30.452000 audit[5079]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4654 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.452000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:47:30.452000 audit: BPF prog-id=214 op=LOAD 
Jan 13 23:47:30.452000 audit[5079]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff7582258 a2=94 a3=2 items=0 ppid=4654 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.452000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:47:30.456000 audit: BPF prog-id=214 op=UNLOAD Jan 13 23:47:30.456000 audit[5079]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4654 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.456000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:47:30.456000 audit: BPF prog-id=215 op=LOAD Jan 13 23:47:30.459000 audit[5076]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=5076 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:30.456000 audit[5079]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff75820d8 a2=40 a3=fffff7582108 items=0 ppid=4654 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.456000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:47:30.460000 audit: BPF prog-id=215 op=UNLOAD Jan 13 23:47:30.460000 audit[5079]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=fffff7582108 items=0 ppid=4654 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.460000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:47:30.460000 audit: BPF prog-id=216 op=LOAD Jan 13 23:47:30.460000 audit[5079]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff7582228 a2=94 a3=b7 items=0 ppid=4654 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.460000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:47:30.460000 audit: BPF prog-id=216 op=UNLOAD Jan 13 23:47:30.460000 audit[5079]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4654 pid=5079 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.460000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:47:30.459000 audit[5076]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffd2dfc2e0 a2=0 a3=1 items=0 ppid=3635 pid=5076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.459000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:30.473000 audit: BPF prog-id=217 op=LOAD Jan 13 23:47:30.473000 audit[5079]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff75818d8 a2=94 a3=2 items=0 ppid=4654 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.473000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:47:30.478000 audit: BPF prog-id=217 op=UNLOAD Jan 13 23:47:30.478000 audit[5079]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4654 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.478000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:47:30.478000 audit: BPF prog-id=218 op=LOAD Jan 13 23:47:30.478000 audit[5079]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff7581a68 a2=94 a3=30 items=0 ppid=4654 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.478000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:47:30.493000 audit: BPF prog-id=219 op=LOAD Jan 13 23:47:30.493000 audit[5082]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe110b0d8 a2=98 a3=ffffe110b0c8 items=0 ppid=4654 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.493000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:47:30.495000 audit: BPF prog-id=219 op=UNLOAD 
Jan 13 23:47:30.495000 audit[5082]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe110b0a8 a3=0 items=0 ppid=4654 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.495000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:47:30.495000 audit: BPF prog-id=220 op=LOAD Jan 13 23:47:30.495000 audit[5082]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe110ad68 a2=74 a3=95 items=0 ppid=4654 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.495000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:47:30.496000 audit: BPF prog-id=220 op=UNLOAD Jan 13 23:47:30.496000 audit[5082]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4654 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.496000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:47:30.496000 audit: BPF prog-id=221 op=LOAD Jan 13 23:47:30.496000 audit[5082]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe110adc8 a2=94 a3=2 items=0 ppid=4654 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.496000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:47:30.497000 audit: BPF prog-id=221 op=UNLOAD Jan 13 23:47:30.497000 audit[5082]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4654 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.497000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:47:30.497000 audit: BPF prog-id=222 op=LOAD Jan 13 23:47:30.500000 audit: BPF prog-id=223 op=LOAD Jan 13 23:47:30.500000 audit[5043]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=5027 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.500000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533363338326532316134376433626637336630373832623165626536 Jan 13 23:47:30.500000 audit: BPF prog-id=223 op=UNLOAD Jan 13 23:47:30.500000 audit[5043]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5027 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.500000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533363338326532316134376433626637336630373832623165626536 Jan 13 23:47:30.500000 audit: BPF prog-id=224 op=LOAD Jan 13 23:47:30.500000 audit[5043]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=5027 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.500000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533363338326532316134376433626637336630373832623165626536 Jan 13 23:47:30.501000 audit: BPF prog-id=225 op=LOAD Jan 13 23:47:30.501000 audit[5043]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=5027 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.501000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533363338326532316134376433626637336630373832623165626536 Jan 13 23:47:30.501000 audit: BPF prog-id=225 op=UNLOAD Jan 13 23:47:30.501000 audit[5043]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5027 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.501000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533363338326532316134376433626637336630373832623165626536 Jan 13 23:47:30.501000 audit: BPF prog-id=224 op=UNLOAD Jan 13 23:47:30.501000 audit[5043]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5027 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.501000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533363338326532316134376433626637336630373832623165626536 Jan 13 23:47:30.502000 audit: BPF prog-id=226 op=LOAD Jan 13 23:47:30.502000 audit[5043]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=5027 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533363338326532316134376433626637336630373832623165626536 Jan 13 23:47:30.574664 containerd[1984]: time="2026-01-13T23:47:30.573823898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-6fffm,Uid:f27ddc53-e7e4-41b7-97d9-616c5339cc85,Namespace:calico-system,Attempt:0,} returns sandbox id \"fdff5e29dbc4f30680df0c505013f1cb53e0daeafae790f1706e8f4e284e247a\"" Jan 13 23:47:30.665765 containerd[1984]: time="2026-01-13T23:47:30.665645918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d495df4b7-2bldm,Uid:db4e6234-4e3a-4385-ad17-4564cf1a27b8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"536382e21a47d3bf73f0782b1ebe6a5360a50f73b394611e7cb2fd5e15598715\"" Jan 13 23:47:30.696458 systemd-networkd[1574]: cali40577fdd299: Link UP Jan 13 23:47:30.706459 systemd-networkd[1574]: cali40577fdd299: Gained carrier Jan 13 23:47:30.722307 containerd[1984]: time="2026-01-13T23:47:30.722254479Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:47:30.725201 containerd[1984]: time="2026-01-13T23:47:30.724987611Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 13 23:47:30.726198 containerd[1984]: time="2026-01-13T23:47:30.725122923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 13 23:47:30.726708 kubelet[3522]: E0113 23:47:30.726657 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:47:30.728265 kubelet[3522]: E0113 23:47:30.726893 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:47:30.728265 kubelet[3522]: E0113 23:47:30.727779 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 
--csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbgsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t67lh_calico-system(62188975-46ce-424e-8434-9b05ea3b2915): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 13 23:47:30.728912 containerd[1984]: time="2026-01-13T23:47:30.728860671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 13 23:47:30.730352 kubelet[3522]: E0113 23:47:30.730258 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t67lh" podUID="62188975-46ce-424e-8434-9b05ea3b2915" Jan 13 23:47:30.748273 containerd[1984]: 2026-01-13 23:47:30.352 [INFO][4993] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--147-k8s-coredns--674b8bbfcf--tfxlt-eth0 coredns-674b8bbfcf- kube-system cc252f99-0335-4f8d-b959-3bfc25386e2c 879 0 2026-01-13 23:46:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-147 
coredns-674b8bbfcf-tfxlt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali40577fdd299 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1" Namespace="kube-system" Pod="coredns-674b8bbfcf-tfxlt" WorkloadEndpoint="ip--172--31--28--147-k8s-coredns--674b8bbfcf--tfxlt-" Jan 13 23:47:30.748273 containerd[1984]: 2026-01-13 23:47:30.353 [INFO][4993] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1" Namespace="kube-system" Pod="coredns-674b8bbfcf-tfxlt" WorkloadEndpoint="ip--172--31--28--147-k8s-coredns--674b8bbfcf--tfxlt-eth0" Jan 13 23:47:30.748273 containerd[1984]: 2026-01-13 23:47:30.557 [INFO][5070] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1" HandleID="k8s-pod-network.738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1" Workload="ip--172--31--28--147-k8s-coredns--674b8bbfcf--tfxlt-eth0" Jan 13 23:47:30.748627 containerd[1984]: 2026-01-13 23:47:30.558 [INFO][5070] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1" HandleID="k8s-pod-network.738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1" Workload="ip--172--31--28--147-k8s-coredns--674b8bbfcf--tfxlt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000314130), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-147", "pod":"coredns-674b8bbfcf-tfxlt", "timestamp":"2026-01-13 23:47:30.557977586 +0000 UTC"}, Hostname:"ip-172-31-28-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:47:30.748627 containerd[1984]: 2026-01-13 23:47:30.562 [INFO][5070] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:47:30.748627 containerd[1984]: 2026-01-13 23:47:30.563 [INFO][5070] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 13 23:47:30.748627 containerd[1984]: 2026-01-13 23:47:30.563 [INFO][5070] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-147' Jan 13 23:47:30.748627 containerd[1984]: 2026-01-13 23:47:30.593 [INFO][5070] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1" host="ip-172-31-28-147" Jan 13 23:47:30.748627 containerd[1984]: 2026-01-13 23:47:30.601 [INFO][5070] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-147" Jan 13 23:47:30.748627 containerd[1984]: 2026-01-13 23:47:30.614 [INFO][5070] ipam/ipam.go 511: Trying affinity for 192.168.14.192/26 host="ip-172-31-28-147" Jan 13 23:47:30.748627 containerd[1984]: 2026-01-13 23:47:30.620 [INFO][5070] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.192/26 host="ip-172-31-28-147" Jan 13 23:47:30.748627 containerd[1984]: 2026-01-13 23:47:30.628 [INFO][5070] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ip-172-31-28-147" Jan 13 23:47:30.749898 containerd[1984]: 2026-01-13 23:47:30.628 [INFO][5070] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1" host="ip-172-31-28-147" Jan 13 23:47:30.749898 containerd[1984]: 2026-01-13 23:47:30.632 [INFO][5070] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1 Jan 13 23:47:30.749898 containerd[1984]: 2026-01-13 23:47:30.651 [INFO][5070] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1" host="ip-172-31-28-147" Jan 13 23:47:30.749898 containerd[1984]: 2026-01-13 23:47:30.673 [INFO][5070] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.14.197/26] block=192.168.14.192/26 handle="k8s-pod-network.738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1" host="ip-172-31-28-147" Jan 13 23:47:30.749898 containerd[1984]: 2026-01-13 23:47:30.673 [INFO][5070] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.197/26] handle="k8s-pod-network.738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1" host="ip-172-31-28-147" Jan 13 23:47:30.749898 containerd[1984]: 2026-01-13 23:47:30.673 [INFO][5070] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 13 23:47:30.749898 containerd[1984]: 2026-01-13 23:47:30.674 [INFO][5070] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.14.197/26] IPv6=[] ContainerID="738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1" HandleID="k8s-pod-network.738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1" Workload="ip--172--31--28--147-k8s-coredns--674b8bbfcf--tfxlt-eth0" Jan 13 23:47:30.750227 containerd[1984]: 2026-01-13 23:47:30.683 [INFO][4993] cni-plugin/k8s.go 418: Populated endpoint ContainerID="738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1" Namespace="kube-system" Pod="coredns-674b8bbfcf-tfxlt" WorkloadEndpoint="ip--172--31--28--147-k8s-coredns--674b8bbfcf--tfxlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--147-k8s-coredns--674b8bbfcf--tfxlt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cc252f99-0335-4f8d-b959-3bfc25386e2c", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 46, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-147", ContainerID:"", Pod:"coredns-674b8bbfcf-tfxlt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40577fdd299", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:47:30.750227 containerd[1984]: 2026-01-13 23:47:30.683 [INFO][4993] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.197/32] ContainerID="738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1" Namespace="kube-system" Pod="coredns-674b8bbfcf-tfxlt" WorkloadEndpoint="ip--172--31--28--147-k8s-coredns--674b8bbfcf--tfxlt-eth0" Jan 13 23:47:30.750227 containerd[1984]: 2026-01-13 23:47:30.683 [INFO][4993] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40577fdd299 ContainerID="738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1" Namespace="kube-system" Pod="coredns-674b8bbfcf-tfxlt" WorkloadEndpoint="ip--172--31--28--147-k8s-coredns--674b8bbfcf--tfxlt-eth0" Jan 13 23:47:30.750227 containerd[1984]: 2026-01-13 23:47:30.709 [INFO][4993] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1" Namespace="kube-system" Pod="coredns-674b8bbfcf-tfxlt" 
WorkloadEndpoint="ip--172--31--28--147-k8s-coredns--674b8bbfcf--tfxlt-eth0" Jan 13 23:47:30.750227 containerd[1984]: 2026-01-13 23:47:30.711 [INFO][4993] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1" Namespace="kube-system" Pod="coredns-674b8bbfcf-tfxlt" WorkloadEndpoint="ip--172--31--28--147-k8s-coredns--674b8bbfcf--tfxlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--147-k8s-coredns--674b8bbfcf--tfxlt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cc252f99-0335-4f8d-b959-3bfc25386e2c", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 46, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-147", ContainerID:"738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1", Pod:"coredns-674b8bbfcf-tfxlt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40577fdd299", MAC:"6e:61:ef:9b:2e:2c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:47:30.750227 containerd[1984]: 2026-01-13 23:47:30.739 [INFO][4993] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1" Namespace="kube-system" Pod="coredns-674b8bbfcf-tfxlt" WorkloadEndpoint="ip--172--31--28--147-k8s-coredns--674b8bbfcf--tfxlt-eth0" Jan 13 23:47:30.821583 containerd[1984]: time="2026-01-13T23:47:30.820559223Z" level=info msg="connecting to shim 738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1" address="unix:///run/containerd/s/8d5e3ad93e826251a1ee22e6d6757c1768906f5ce906aec60b5b384092d03e17" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:47:30.886551 systemd[1]: Started cri-containerd-738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1.scope - libcontainer container 738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1. 
Jan 13 23:47:30.897037 containerd[1984]: time="2026-01-13T23:47:30.896967699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d495df4b7-p456r,Uid:38c0b046-9ddd-4e0f-99d2-de8c0748710c,Namespace:calico-apiserver,Attempt:0,}" Jan 13 23:47:30.953954 systemd-networkd[1574]: cali22dd70a2d22: Gained IPv6LL Jan 13 23:47:30.965000 audit: BPF prog-id=227 op=LOAD Jan 13 23:47:30.966000 audit: BPF prog-id=228 op=LOAD Jan 13 23:47:30.966000 audit[5124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5112 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733386433353435643836616662663161666532663537666235396366 Jan 13 23:47:30.966000 audit: BPF prog-id=228 op=UNLOAD Jan 13 23:47:30.966000 audit[5124]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5112 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733386433353435643836616662663161666532663537666235396366 Jan 13 23:47:30.966000 audit: BPF prog-id=229 op=LOAD Jan 13 23:47:30.966000 audit[5124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5112 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733386433353435643836616662663161666532663537666235396366 Jan 13 23:47:30.966000 audit: BPF prog-id=230 op=LOAD Jan 13 23:47:30.966000 audit[5124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5112 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733386433353435643836616662663161666532663537666235396366 Jan 13 23:47:30.967000 audit: BPF prog-id=230 op=UNLOAD Jan 13 23:47:30.967000 audit[5124]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5112 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.967000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733386433353435643836616662663161666532663537666235396366 Jan 13 23:47:30.967000 audit: BPF prog-id=229 op=UNLOAD Jan 13 23:47:30.967000 audit[5124]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5112 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733386433353435643836616662663161666532663537666235396366 Jan 13 23:47:30.967000 audit: BPF prog-id=231 op=LOAD Jan 13 23:47:30.967000 audit[5124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5112 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733386433353435643836616662663161666532663537666235396366 Jan 13 23:47:30.995000 audit: BPF prog-id=232 op=LOAD Jan 13 23:47:30.995000 audit[5082]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe110ad88 a2=40 a3=ffffe110adb8 items=0 ppid=4654 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.995000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:47:30.995000 audit: BPF prog-id=232 op=UNLOAD Jan 13 23:47:30.995000 audit[5082]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffe110adb8 items=0 ppid=4654 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:30.995000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:47:30.998306 containerd[1984]: time="2026-01-13T23:47:30.998260684Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:47:31.001370 containerd[1984]: time="2026-01-13T23:47:31.001150428Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 13 23:47:31.001805 containerd[1984]: time="2026-01-13T23:47:31.001224852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes 
read=0" Jan 13 23:47:31.002200 kubelet[3522]: E0113 23:47:31.002121 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:47:31.002758 kubelet[3522]: E0113 23:47:31.002198 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:47:31.003163 kubelet[3522]: E0113 23:47:31.003065 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p5g5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-6fffm_calico-system(f27ddc53-e7e4-41b7-97d9-616c5339cc85): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 13 23:47:31.004574 containerd[1984]: time="2026-01-13T23:47:31.003830172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:47:31.004837 kubelet[3522]: E0113 23:47:31.004760 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6fffm" podUID="f27ddc53-e7e4-41b7-97d9-616c5339cc85" Jan 13 23:47:31.059000 audit: BPF prog-id=233 op=LOAD Jan 13 23:47:31.059000 audit[5082]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe110ad98 a2=94 a3=4 items=0 ppid=4654 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.059000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:47:31.059000 audit: BPF prog-id=233 op=UNLOAD Jan 13 23:47:31.059000 audit[5082]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4654 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.059000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:47:31.059000 audit: BPF prog-id=234 op=LOAD Jan 13 23:47:31.059000 audit[5082]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe110abd8 a2=94 a3=5 items=0 ppid=4654 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.059000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:47:31.059000 audit: BPF prog-id=234 op=UNLOAD Jan 13 23:47:31.059000 audit[5082]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4654 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.059000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:47:31.059000 audit: BPF prog-id=235 op=LOAD Jan 13 23:47:31.059000 audit[5082]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe110ae08 a2=94 a3=6 items=0 ppid=4654 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.059000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:47:31.060000 audit: BPF prog-id=235 op=UNLOAD Jan 13 23:47:31.060000 audit[5082]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4654 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.060000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:47:31.060000 audit: BPF prog-id=236 op=LOAD Jan 13 23:47:31.060000 audit[5082]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe110a5d8 a2=94 a3=83 items=0 ppid=4654 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.060000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:47:31.061000 audit: BPF prog-id=237 op=LOAD Jan 13 23:47:31.061000 audit[5082]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffe110a398 a2=94 a3=2 items=0 ppid=4654 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.061000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:47:31.061000 audit: BPF prog-id=237 op=UNLOAD Jan 13 23:47:31.061000 audit[5082]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4654 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.061000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:47:31.061000 audit: BPF prog-id=236 op=UNLOAD Jan 13 23:47:31.061000 audit[5082]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=28a0c620 a3=289ffb00 items=0 ppid=4654 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.061000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:47:31.072000 audit: BPF prog-id=218 op=UNLOAD Jan 13 23:47:31.072000 audit[4654]: SYSCALL arch=c00000b7 syscall=35 success=yes 
exit=0 a0=ffffffffffffff9c a1=400070f440 a2=0 a3=0 items=0 ppid=4644 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.072000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 13 23:47:31.082068 systemd-networkd[1574]: vxlan.calico: Gained IPv6LL Jan 13 23:47:31.091455 containerd[1984]: time="2026-01-13T23:47:31.090647016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tfxlt,Uid:cc252f99-0335-4f8d-b959-3bfc25386e2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1\"" Jan 13 23:47:31.107768 containerd[1984]: time="2026-01-13T23:47:31.107691336Z" level=info msg="CreateContainer within sandbox \"738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 23:47:31.143537 containerd[1984]: time="2026-01-13T23:47:31.143044981Z" level=info msg="Container 90aacfc09f76d11f407ee61cef4fc0317abf6fd6666cd769e634a53d26380380: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:47:31.179174 containerd[1984]: time="2026-01-13T23:47:31.178973029Z" level=info msg="CreateContainer within sandbox \"738d3545d86afbf1afe2f57fb59cf34bba8e3871924f7163210f200e7b1e52a1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"90aacfc09f76d11f407ee61cef4fc0317abf6fd6666cd769e634a53d26380380\"" Jan 13 23:47:31.182001 containerd[1984]: time="2026-01-13T23:47:31.181924453Z" level=info msg="StartContainer for \"90aacfc09f76d11f407ee61cef4fc0317abf6fd6666cd769e634a53d26380380\"" Jan 13 23:47:31.187685 containerd[1984]: time="2026-01-13T23:47:31.186892417Z" level=info msg="connecting to shim 90aacfc09f76d11f407ee61cef4fc0317abf6fd6666cd769e634a53d26380380" address="unix:///run/containerd/s/8d5e3ad93e826251a1ee22e6d6757c1768906f5ce906aec60b5b384092d03e17" protocol=ttrpc version=3 Jan 13 23:47:31.244076 systemd[1]: Started cri-containerd-90aacfc09f76d11f407ee61cef4fc0317abf6fd6666cd769e634a53d26380380.scope - libcontainer container 90aacfc09f76d11f407ee61cef4fc0317abf6fd6666cd769e634a53d26380380. 
Jan 13 23:47:31.298348 kubelet[3522]: E0113 23:47:31.297937 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t67lh" podUID="62188975-46ce-424e-8434-9b05ea3b2915" Jan 13 23:47:31.299012 containerd[1984]: time="2026-01-13T23:47:31.298133125Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:47:31.302488 kubelet[3522]: E0113 23:47:31.302435 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6fffm" podUID="f27ddc53-e7e4-41b7-97d9-616c5339cc85" Jan 13 23:47:31.303551 containerd[1984]: time="2026-01-13T23:47:31.303438085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:47:31.303917 containerd[1984]: time="2026-01-13T23:47:31.303856549Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:47:31.305734 kubelet[3522]: E0113 23:47:31.304452 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:47:31.305734 kubelet[3522]: E0113 23:47:31.305603 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:47:31.307188 kubelet[3522]: E0113 23:47:31.307101 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2ptt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d495df4b7-2bldm_calico-apiserver(db4e6234-4e3a-4385-ad17-4564cf1a27b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:47:31.309068 kubelet[3522]: E0113 23:47:31.309007 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d495df4b7-2bldm" podUID="db4e6234-4e3a-4385-ad17-4564cf1a27b8" Jan 13 23:47:31.334000 audit: BPF prog-id=238 op=LOAD Jan 13 23:47:31.335000 audit: BPF prog-id=239 op=LOAD Jan 13 23:47:31.335000 audit[5174]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5112 pid=5174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.335000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930616163666330396637366431316634303765653631636566346663 Jan 13 23:47:31.336000 audit: BPF prog-id=239 op=UNLOAD Jan 13 23:47:31.336000 audit[5174]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 
a0=15 a1=0 a2=0 a3=0 items=0 ppid=5112 pid=5174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930616163666330396637366431316634303765653631636566346663 Jan 13 23:47:31.338739 systemd-networkd[1574]: cali59e8bd6c8e1: Gained IPv6LL Jan 13 23:47:31.337000 audit: BPF prog-id=240 op=LOAD Jan 13 23:47:31.337000 audit[5174]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5112 pid=5174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.337000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930616163666330396637366431316634303765653631636566346663 Jan 13 23:47:31.340000 audit: BPF prog-id=241 op=LOAD Jan 13 23:47:31.340000 audit[5174]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5112 pid=5174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930616163666330396637366431316634303765653631636566346663 Jan 13 23:47:31.341000 audit: BPF prog-id=241 op=UNLOAD Jan 13 23:47:31.341000 audit[5174]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5112 pid=5174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930616163666330396637366431316634303765653631636566346663 Jan 13 23:47:31.343000 audit: BPF prog-id=240 op=UNLOAD Jan 13 23:47:31.343000 audit[5174]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5112 pid=5174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.343000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930616163666330396637366431316634303765653631636566346663 Jan 13 23:47:31.343000 audit: BPF prog-id=242 op=LOAD Jan 13 23:47:31.343000 audit[5174]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5112 pid=5174 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.343000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930616163666330396637366431316634303765653631636566346663 Jan 13 23:47:31.410445 systemd-networkd[1574]: cali154077af9cd: Link UP Jan 13 23:47:31.410942 systemd-networkd[1574]: cali154077af9cd: Gained carrier Jan 13 23:47:31.434796 containerd[1984]: time="2026-01-13T23:47:31.434743034Z" level=info msg="StartContainer for \"90aacfc09f76d11f407ee61cef4fc0317abf6fd6666cd769e634a53d26380380\" returns successfully" Jan 13 23:47:31.478527 containerd[1984]: 2026-01-13 23:47:31.072 [INFO][5145] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--p456r-eth0 calico-apiserver-5d495df4b7- calico-apiserver 38c0b046-9ddd-4e0f-99d2-de8c0748710c 883 0 2026-01-13 23:46:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d495df4b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-147 calico-apiserver-5d495df4b7-p456r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali154077af9cd [] [] }} ContainerID="772f845e5891e7f1dfc0819e979de74a59a4e0d9bf6a5a74ec28fda83f1125d6" Namespace="calico-apiserver" Pod="calico-apiserver-5d495df4b7-p456r" WorkloadEndpoint="ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--p456r-" Jan 13 23:47:31.478527 containerd[1984]: 2026-01-13 23:47:31.072 [INFO][5145] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="772f845e5891e7f1dfc0819e979de74a59a4e0d9bf6a5a74ec28fda83f1125d6" Namespace="calico-apiserver" Pod="calico-apiserver-5d495df4b7-p456r" WorkloadEndpoint="ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--p456r-eth0" Jan 13 23:47:31.478527 containerd[1984]: 2026-01-13 23:47:31.206 [INFO][5166] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="772f845e5891e7f1dfc0819e979de74a59a4e0d9bf6a5a74ec28fda83f1125d6" HandleID="k8s-pod-network.772f845e5891e7f1dfc0819e979de74a59a4e0d9bf6a5a74ec28fda83f1125d6" Workload="ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--p456r-eth0" Jan 13 23:47:31.478527 containerd[1984]: 2026-01-13 23:47:31.208 [INFO][5166] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="772f845e5891e7f1dfc0819e979de74a59a4e0d9bf6a5a74ec28fda83f1125d6" HandleID="k8s-pod-network.772f845e5891e7f1dfc0819e979de74a59a4e0d9bf6a5a74ec28fda83f1125d6" Workload="ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--p456r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000339a30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-147", "pod":"calico-apiserver-5d495df4b7-p456r", "timestamp":"2026-01-13 23:47:31.206388661 +0000 UTC"}, Hostname:"ip-172-31-28-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:47:31.478527 containerd[1984]: 2026-01-13 23:47:31.209 [INFO][5166] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:47:31.478527 containerd[1984]: 2026-01-13 23:47:31.209 [INFO][5166] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 13 23:47:31.478527 containerd[1984]: 2026-01-13 23:47:31.209 [INFO][5166] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-147' Jan 13 23:47:31.478527 containerd[1984]: 2026-01-13 23:47:31.251 [INFO][5166] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.772f845e5891e7f1dfc0819e979de74a59a4e0d9bf6a5a74ec28fda83f1125d6" host="ip-172-31-28-147" Jan 13 23:47:31.478527 containerd[1984]: 2026-01-13 23:47:31.269 [INFO][5166] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-147" Jan 13 23:47:31.478527 containerd[1984]: 2026-01-13 23:47:31.282 [INFO][5166] ipam/ipam.go 511: Trying affinity for 192.168.14.192/26 host="ip-172-31-28-147" Jan 13 23:47:31.478527 containerd[1984]: 2026-01-13 23:47:31.287 [INFO][5166] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.192/26 host="ip-172-31-28-147" Jan 13 23:47:31.478527 containerd[1984]: 2026-01-13 23:47:31.304 [INFO][5166] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ip-172-31-28-147" Jan 13 23:47:31.478527 containerd[1984]: 2026-01-13 23:47:31.304 [INFO][5166] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.772f845e5891e7f1dfc0819e979de74a59a4e0d9bf6a5a74ec28fda83f1125d6" host="ip-172-31-28-147" Jan 13 23:47:31.478527 containerd[1984]: 2026-01-13 23:47:31.320 [INFO][5166] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.772f845e5891e7f1dfc0819e979de74a59a4e0d9bf6a5a74ec28fda83f1125d6 Jan 13 23:47:31.478527 containerd[1984]: 2026-01-13 23:47:31.342 [INFO][5166] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.772f845e5891e7f1dfc0819e979de74a59a4e0d9bf6a5a74ec28fda83f1125d6" host="ip-172-31-28-147" Jan 13 23:47:31.478527 containerd[1984]: 2026-01-13 23:47:31.377 [INFO][5166] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.14.198/26] block=192.168.14.192/26 handle="k8s-pod-network.772f845e5891e7f1dfc0819e979de74a59a4e0d9bf6a5a74ec28fda83f1125d6" host="ip-172-31-28-147" Jan 13 23:47:31.478527 containerd[1984]: 2026-01-13 23:47:31.378 [INFO][5166] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.198/26] handle="k8s-pod-network.772f845e5891e7f1dfc0819e979de74a59a4e0d9bf6a5a74ec28fda83f1125d6" host="ip-172-31-28-147" Jan 13 23:47:31.478527 containerd[1984]: 2026-01-13 23:47:31.378 [INFO][5166] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 13 23:47:31.478527 containerd[1984]: 2026-01-13 23:47:31.378 [INFO][5166] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.14.198/26] IPv6=[] ContainerID="772f845e5891e7f1dfc0819e979de74a59a4e0d9bf6a5a74ec28fda83f1125d6" HandleID="k8s-pod-network.772f845e5891e7f1dfc0819e979de74a59a4e0d9bf6a5a74ec28fda83f1125d6" Workload="ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--p456r-eth0" Jan 13 23:47:31.480250 containerd[1984]: 2026-01-13 23:47:31.387 [INFO][5145] cni-plugin/k8s.go 418: Populated endpoint ContainerID="772f845e5891e7f1dfc0819e979de74a59a4e0d9bf6a5a74ec28fda83f1125d6" Namespace="calico-apiserver" Pod="calico-apiserver-5d495df4b7-p456r" WorkloadEndpoint="ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--p456r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--p456r-eth0", GenerateName:"calico-apiserver-5d495df4b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"38c0b046-9ddd-4e0f-99d2-de8c0748710c", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d495df4b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-147", ContainerID:"", Pod:"calico-apiserver-5d495df4b7-p456r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali154077af9cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:47:31.480250 containerd[1984]: 2026-01-13 23:47:31.388 [INFO][5145] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.198/32] ContainerID="772f845e5891e7f1dfc0819e979de74a59a4e0d9bf6a5a74ec28fda83f1125d6" Namespace="calico-apiserver" Pod="calico-apiserver-5d495df4b7-p456r" WorkloadEndpoint="ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--p456r-eth0" Jan 13 23:47:31.480250 containerd[1984]: 2026-01-13 23:47:31.388 [INFO][5145] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali154077af9cd ContainerID="772f845e5891e7f1dfc0819e979de74a59a4e0d9bf6a5a74ec28fda83f1125d6" Namespace="calico-apiserver" Pod="calico-apiserver-5d495df4b7-p456r" WorkloadEndpoint="ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--p456r-eth0" Jan 13 23:47:31.480250 containerd[1984]: 2026-01-13 23:47:31.415 [INFO][5145] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="772f845e5891e7f1dfc0819e979de74a59a4e0d9bf6a5a74ec28fda83f1125d6" Namespace="calico-apiserver" Pod="calico-apiserver-5d495df4b7-p456r" WorkloadEndpoint="ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--p456r-eth0" Jan 13 23:47:31.480250 containerd[1984]: 2026-01-13 23:47:31.417 [INFO][5145] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="772f845e5891e7f1dfc0819e979de74a59a4e0d9bf6a5a74ec28fda83f1125d6" Namespace="calico-apiserver" Pod="calico-apiserver-5d495df4b7-p456r" WorkloadEndpoint="ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--p456r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--p456r-eth0", GenerateName:"calico-apiserver-5d495df4b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"38c0b046-9ddd-4e0f-99d2-de8c0748710c", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d495df4b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-147", ContainerID:"772f845e5891e7f1dfc0819e979de74a59a4e0d9bf6a5a74ec28fda83f1125d6", Pod:"calico-apiserver-5d495df4b7-p456r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali154077af9cd", MAC:"ca:1e:d9:e6:d3:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:47:31.480250 containerd[1984]: 2026-01-13 23:47:31.473 [INFO][5145] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="772f845e5891e7f1dfc0819e979de74a59a4e0d9bf6a5a74ec28fda83f1125d6" Namespace="calico-apiserver" Pod="calico-apiserver-5d495df4b7-p456r" WorkloadEndpoint="ip--172--31--28--147-k8s-calico--apiserver--5d495df4b7--p456r-eth0" Jan 13 23:47:31.529305 containerd[1984]: time="2026-01-13T23:47:31.528966579Z" level=info msg="connecting to shim 772f845e5891e7f1dfc0819e979de74a59a4e0d9bf6a5a74ec28fda83f1125d6" address="unix:///run/containerd/s/b6fccfdc1a750d1f50df2afb214dfd30f3881004ee0e2bc1ab39af68fc6d83b3" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:47:31.582000 audit[5245]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=5245 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:31.582000 audit[5245]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd97921e0 a2=0 a3=1 items=0 ppid=3635 pid=5245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.582000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:31.588000 audit[5245]: NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=5245 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:31.588000 audit[5245]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffd97921e0 a2=0 a3=1 items=0 
ppid=3635 pid=5245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.588000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:31.643044 systemd[1]: Started cri-containerd-772f845e5891e7f1dfc0819e979de74a59a4e0d9bf6a5a74ec28fda83f1125d6.scope - libcontainer container 772f845e5891e7f1dfc0819e979de74a59a4e0d9bf6a5a74ec28fda83f1125d6. Jan 13 23:47:31.721608 systemd-networkd[1574]: cali2de7a7e270c: Gained IPv6LL Jan 13 23:47:31.764000 audit[5267]: NETFILTER_CFG table=mangle:129 family=2 entries=16 op=nft_register_chain pid=5267 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:47:31.764000 audit[5267]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=fffff06faad0 a2=0 a3=ffffbbeb0fa8 items=0 ppid=4654 pid=5267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.764000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:47:31.789000 audit[5266]: NETFILTER_CFG table=raw:130 family=2 entries=21 op=nft_register_chain pid=5266 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:47:31.789000 audit[5266]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=fffff3607540 a2=0 a3=ffffaf4bafa8 items=0 ppid=4654 pid=5266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.789000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:47:31.808000 audit: BPF prog-id=243 op=LOAD Jan 13 23:47:31.812000 audit: BPF prog-id=244 op=LOAD Jan 13 23:47:31.812000 audit[5244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=5231 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737326638343565353839316537663164666330383139653937396465 Jan 13 23:47:31.812000 audit: BPF prog-id=244 op=UNLOAD Jan 13 23:47:31.812000 audit[5244]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5231 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737326638343565353839316537663164666330383139653937396465 Jan 
13 23:47:31.812000 audit: BPF prog-id=245 op=LOAD Jan 13 23:47:31.812000 audit[5244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=5231 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737326638343565353839316537663164666330383139653937396465 Jan 13 23:47:31.812000 audit: BPF prog-id=246 op=LOAD Jan 13 23:47:31.812000 audit[5244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=5231 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737326638343565353839316537663164666330383139653937396465 Jan 13 23:47:31.812000 audit: BPF prog-id=246 op=UNLOAD Jan 13 23:47:31.812000 audit[5244]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5231 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737326638343565353839316537663164666330383139653937396465 Jan 13 23:47:31.812000 audit: BPF prog-id=245 op=UNLOAD Jan 13 23:47:31.812000 audit[5244]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5231 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737326638343565353839316537663164666330383139653937396465 Jan 13 23:47:31.812000 audit: BPF prog-id=247 op=LOAD Jan 13 23:47:31.812000 audit[5244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=5231 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737326638343565353839316537663164666330383139653937396465 Jan 13 23:47:31.817000 audit[5278]: NETFILTER_CFG table=nat:131 family=2 entries=15 op=nft_register_chain pid=5278 subj=system_u:system_r:kernel_t:s0 
comm="iptables-nft-re" Jan 13 23:47:31.817000 audit[5278]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffc21c19a0 a2=0 a3=ffffa745efa8 items=0 ppid=4654 pid=5278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.817000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:47:31.899106 containerd[1984]: time="2026-01-13T23:47:31.898779988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d44f7875-phhfv,Uid:12f7eaa9-9d20-4e8f-9f20-d2118b28d17a,Namespace:calico-system,Attempt:0,}" Jan 13 23:47:31.901315 containerd[1984]: time="2026-01-13T23:47:31.901253536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6f6jh,Uid:fff694d2-51f0-4513-a81f-bedcae1438b9,Namespace:kube-system,Attempt:0,}" Jan 13 23:47:31.848000 audit[5280]: NETFILTER_CFG table=filter:132 family=2 entries=122 op=nft_register_chain pid=5280 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:47:31.848000 audit[5280]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=69792 a0=3 a1=ffffe7368f90 a2=0 a3=ffffa6a72fa8 items=0 ppid=4654 pid=5280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:31.848000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:47:32.325854 kubelet[3522]: E0113 23:47:32.325231 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d495df4b7-2bldm" podUID="db4e6234-4e3a-4385-ad17-4564cf1a27b8" Jan 13 23:47:32.336870 kubelet[3522]: E0113 23:47:32.336812 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6fffm" podUID="f27ddc53-e7e4-41b7-97d9-616c5339cc85" Jan 13 23:47:32.348234 containerd[1984]: time="2026-01-13T23:47:32.348050655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d495df4b7-p456r,Uid:38c0b046-9ddd-4e0f-99d2-de8c0748710c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"772f845e5891e7f1dfc0819e979de74a59a4e0d9bf6a5a74ec28fda83f1125d6\"" Jan 13 23:47:32.361288 containerd[1984]: time="2026-01-13T23:47:32.358978155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:47:32.361461 systemd-networkd[1574]: cali40577fdd299: Gained IPv6LL Jan 13 23:47:32.403000 
audit[5337]: NETFILTER_CFG table=filter:133 family=2 entries=159 op=nft_register_chain pid=5337 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:47:32.403000 audit[5337]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=93168 a0=3 a1=fffff969b830 a2=0 a3=ffff9e1c4fa8 items=0 ppid=4654 pid=5337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:32.403000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:47:32.493296 systemd-networkd[1574]: calif72c4320a12: Link UP Jan 13 23:47:32.498887 systemd-networkd[1574]: calif72c4320a12: Gained carrier Jan 13 23:47:32.511000 audit[5339]: NETFILTER_CFG table=filter:134 family=2 entries=20 op=nft_register_rule pid=5339 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:32.511000 audit[5339]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe7d7b780 a2=0 a3=1 items=0 ppid=3635 pid=5339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:32.511000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:32.523000 audit[5339]: NETFILTER_CFG table=nat:135 family=2 entries=14 op=nft_register_rule pid=5339 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:32.523000 audit[5339]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe7d7b780 a2=0 a3=1 items=0 ppid=3635 pid=5339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:32.523000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:32.537740 kubelet[3522]: I0113 23:47:32.537656 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tfxlt" podStartSLOduration=55.537628288 podStartE2EDuration="55.537628288s" podCreationTimestamp="2026-01-13 23:46:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:47:32.462426315 +0000 UTC m=+58.870135289" watchObservedRunningTime="2026-01-13 23:47:32.537628288 +0000 UTC m=+58.945337262" Jan 13 23:47:32.549561 containerd[1984]: 2026-01-13 23:47:32.099 [INFO][5286] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--147-k8s-calico--kube--controllers--86d44f7875--phhfv-eth0 calico-kube-controllers-86d44f7875- calico-system 12f7eaa9-9d20-4e8f-9f20-d2118b28d17a 881 0 2026-01-13 23:47:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:86d44f7875 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-28-147 calico-kube-controllers-86d44f7875-phhfv eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] calif72c4320a12 [] [] }} ContainerID="b8a645c0425dafb181966a0db9d6a082dfe42e9308f7605ae3f2f2521b66d6a3" Namespace="calico-system" Pod="calico-kube-controllers-86d44f7875-phhfv" WorkloadEndpoint="ip--172--31--28--147-k8s-calico--kube--controllers--86d44f7875--phhfv-" Jan 13 23:47:32.549561 containerd[1984]: 2026-01-13 23:47:32.099 [INFO][5286] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b8a645c0425dafb181966a0db9d6a082dfe42e9308f7605ae3f2f2521b66d6a3" Namespace="calico-system" Pod="calico-kube-controllers-86d44f7875-phhfv" WorkloadEndpoint="ip--172--31--28--147-k8s-calico--kube--controllers--86d44f7875--phhfv-eth0" Jan 13 23:47:32.549561 containerd[1984]: 2026-01-13 23:47:32.247 [INFO][5311] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b8a645c0425dafb181966a0db9d6a082dfe42e9308f7605ae3f2f2521b66d6a3" HandleID="k8s-pod-network.b8a645c0425dafb181966a0db9d6a082dfe42e9308f7605ae3f2f2521b66d6a3" Workload="ip--172--31--28--147-k8s-calico--kube--controllers--86d44f7875--phhfv-eth0" Jan 13 23:47:32.549561 containerd[1984]: 2026-01-13 23:47:32.248 [INFO][5311] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b8a645c0425dafb181966a0db9d6a082dfe42e9308f7605ae3f2f2521b66d6a3" HandleID="k8s-pod-network.b8a645c0425dafb181966a0db9d6a082dfe42e9308f7605ae3f2f2521b66d6a3" Workload="ip--172--31--28--147-k8s-calico--kube--controllers--86d44f7875--phhfv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000353ee0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-147", "pod":"calico-kube-controllers-86d44f7875-phhfv", "timestamp":"2026-01-13 23:47:32.247117382 +0000 UTC"}, Hostname:"ip-172-31-28-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:47:32.549561 containerd[1984]: 2026-01-13 23:47:32.248 [INFO][5311] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:47:32.549561 containerd[1984]: 2026-01-13 23:47:32.249 [INFO][5311] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 13 23:47:32.549561 containerd[1984]: 2026-01-13 23:47:32.249 [INFO][5311] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-147' Jan 13 23:47:32.549561 containerd[1984]: 2026-01-13 23:47:32.298 [INFO][5311] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b8a645c0425dafb181966a0db9d6a082dfe42e9308f7605ae3f2f2521b66d6a3" host="ip-172-31-28-147" Jan 13 23:47:32.549561 containerd[1984]: 2026-01-13 23:47:32.333 [INFO][5311] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-147" Jan 13 23:47:32.549561 containerd[1984]: 2026-01-13 23:47:32.372 [INFO][5311] ipam/ipam.go 511: Trying affinity for 192.168.14.192/26 host="ip-172-31-28-147" Jan 13 23:47:32.549561 containerd[1984]: 2026-01-13 23:47:32.387 [INFO][5311] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.192/26 host="ip-172-31-28-147" Jan 13 23:47:32.549561 containerd[1984]: 2026-01-13 23:47:32.410 [INFO][5311] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ip-172-31-28-147" Jan 13 23:47:32.549561 containerd[1984]: 2026-01-13 23:47:32.410 [INFO][5311] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.b8a645c0425dafb181966a0db9d6a082dfe42e9308f7605ae3f2f2521b66d6a3" host="ip-172-31-28-147" Jan 13 23:47:32.549561 containerd[1984]: 2026-01-13 23:47:32.431 [INFO][5311] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b8a645c0425dafb181966a0db9d6a082dfe42e9308f7605ae3f2f2521b66d6a3 Jan 13 23:47:32.549561 containerd[1984]: 2026-01-13 23:47:32.443 [INFO][5311] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.b8a645c0425dafb181966a0db9d6a082dfe42e9308f7605ae3f2f2521b66d6a3" host="ip-172-31-28-147" Jan 13 23:47:32.549561 containerd[1984]: 2026-01-13 23:47:32.471 [INFO][5311] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.14.199/26] block=192.168.14.192/26 handle="k8s-pod-network.b8a645c0425dafb181966a0db9d6a082dfe42e9308f7605ae3f2f2521b66d6a3" host="ip-172-31-28-147" Jan 13 23:47:32.549561 containerd[1984]: 2026-01-13 23:47:32.471 [INFO][5311] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.199/26] handle="k8s-pod-network.b8a645c0425dafb181966a0db9d6a082dfe42e9308f7605ae3f2f2521b66d6a3" host="ip-172-31-28-147" Jan 13 23:47:32.549561 containerd[1984]: 2026-01-13 23:47:32.471 [INFO][5311] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 13 23:47:32.549561 containerd[1984]: 2026-01-13 23:47:32.472 [INFO][5311] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.14.199/26] IPv6=[] ContainerID="b8a645c0425dafb181966a0db9d6a082dfe42e9308f7605ae3f2f2521b66d6a3" HandleID="k8s-pod-network.b8a645c0425dafb181966a0db9d6a082dfe42e9308f7605ae3f2f2521b66d6a3" Workload="ip--172--31--28--147-k8s-calico--kube--controllers--86d44f7875--phhfv-eth0" Jan 13 23:47:32.550738 containerd[1984]: 2026-01-13 23:47:32.480 [INFO][5286] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b8a645c0425dafb181966a0db9d6a082dfe42e9308f7605ae3f2f2521b66d6a3" Namespace="calico-system" Pod="calico-kube-controllers-86d44f7875-phhfv" WorkloadEndpoint="ip--172--31--28--147-k8s-calico--kube--controllers--86d44f7875--phhfv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--147-k8s-calico--kube--controllers--86d44f7875--phhfv-eth0", GenerateName:"calico-kube-controllers-86d44f7875-", Namespace:"calico-system", SelfLink:"", UID:"12f7eaa9-9d20-4e8f-9f20-d2118b28d17a", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 47, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86d44f7875", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-147", ContainerID:"", Pod:"calico-kube-controllers-86d44f7875-phhfv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif72c4320a12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:47:32.550738 containerd[1984]: 2026-01-13 23:47:32.480 [INFO][5286] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.199/32] ContainerID="b8a645c0425dafb181966a0db9d6a082dfe42e9308f7605ae3f2f2521b66d6a3" Namespace="calico-system" Pod="calico-kube-controllers-86d44f7875-phhfv" WorkloadEndpoint="ip--172--31--28--147-k8s-calico--kube--controllers--86d44f7875--phhfv-eth0" Jan 13 23:47:32.550738 containerd[1984]: 2026-01-13 23:47:32.480 [INFO][5286] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif72c4320a12 ContainerID="b8a645c0425dafb181966a0db9d6a082dfe42e9308f7605ae3f2f2521b66d6a3" Namespace="calico-system" Pod="calico-kube-controllers-86d44f7875-phhfv" WorkloadEndpoint="ip--172--31--28--147-k8s-calico--kube--controllers--86d44f7875--phhfv-eth0" Jan 13 23:47:32.550738 containerd[1984]: 2026-01-13 23:47:32.507 [INFO][5286] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b8a645c0425dafb181966a0db9d6a082dfe42e9308f7605ae3f2f2521b66d6a3" Namespace="calico-system" Pod="calico-kube-controllers-86d44f7875-phhfv" WorkloadEndpoint="ip--172--31--28--147-k8s-calico--kube--controllers--86d44f7875--phhfv-eth0" Jan 13 23:47:32.550738 containerd[1984]: 
2026-01-13 23:47:32.508 [INFO][5286] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b8a645c0425dafb181966a0db9d6a082dfe42e9308f7605ae3f2f2521b66d6a3" Namespace="calico-system" Pod="calico-kube-controllers-86d44f7875-phhfv" WorkloadEndpoint="ip--172--31--28--147-k8s-calico--kube--controllers--86d44f7875--phhfv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--147-k8s-calico--kube--controllers--86d44f7875--phhfv-eth0", GenerateName:"calico-kube-controllers-86d44f7875-", Namespace:"calico-system", SelfLink:"", UID:"12f7eaa9-9d20-4e8f-9f20-d2118b28d17a", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 47, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86d44f7875", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-147", ContainerID:"b8a645c0425dafb181966a0db9d6a082dfe42e9308f7605ae3f2f2521b66d6a3", Pod:"calico-kube-controllers-86d44f7875-phhfv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif72c4320a12", MAC:"9a:6a:82:e3:2f:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:47:32.550738 containerd[1984]: 2026-01-13 23:47:32.538 [INFO][5286] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b8a645c0425dafb181966a0db9d6a082dfe42e9308f7605ae3f2f2521b66d6a3" Namespace="calico-system" Pod="calico-kube-controllers-86d44f7875-phhfv" WorkloadEndpoint="ip--172--31--28--147-k8s-calico--kube--controllers--86d44f7875--phhfv-eth0" Jan 13 23:47:32.632014 containerd[1984]: time="2026-01-13T23:47:32.631715344Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:47:32.633942 containerd[1984]: time="2026-01-13T23:47:32.633881272Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:47:32.634242 containerd[1984]: time="2026-01-13T23:47:32.634120372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:47:32.635691 kubelet[3522]: E0113 23:47:32.634764 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:47:32.635691 kubelet[3522]: E0113 23:47:32.635559 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:47:32.638387 kubelet[3522]: E0113 23:47:32.637199 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9hmj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d495df4b7-p456r_calico-apiserver(38c0b046-9ddd-4e0f-99d2-de8c0748710c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:47:32.639711 kubelet[3522]: E0113 23:47:32.639489 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d495df4b7-p456r" podUID="38c0b046-9ddd-4e0f-99d2-de8c0748710c" Jan 13 23:47:32.650650 systemd-networkd[1574]: cali98a1dab838d: Link UP Jan 13 23:47:32.653532 containerd[1984]: time="2026-01-13T23:47:32.653206648Z" level=info msg="connecting to shim b8a645c0425dafb181966a0db9d6a082dfe42e9308f7605ae3f2f2521b66d6a3" address="unix:///run/containerd/s/253a1dada898d599a8db3ab2e473fadd877fbc2aa82dbe4cc55e945fdd5d67e4" 
namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:47:32.655785 systemd-networkd[1574]: cali98a1dab838d: Gained carrier Jan 13 23:47:32.643000 audit[5349]: NETFILTER_CFG table=filter:136 family=2 entries=20 op=nft_register_rule pid=5349 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:32.643000 audit[5349]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd7706ae0 a2=0 a3=1 items=0 ppid=3635 pid=5349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:32.643000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:32.678000 audit[5349]: NETFILTER_CFG table=nat:137 family=2 entries=14 op=nft_register_rule pid=5349 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:32.678000 audit[5349]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffd7706ae0 a2=0 a3=1 items=0 ppid=3635 pid=5349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:32.678000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:32.724524 containerd[1984]: 2026-01-13 23:47:32.159 [INFO][5288] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--147-k8s-coredns--674b8bbfcf--6f6jh-eth0 coredns-674b8bbfcf- kube-system fff694d2-51f0-4513-a81f-bedcae1438b9 880 0 2026-01-13 23:46:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-147 coredns-674b8bbfcf-6f6jh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali98a1dab838d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5" Namespace="kube-system" Pod="coredns-674b8bbfcf-6f6jh" WorkloadEndpoint="ip--172--31--28--147-k8s-coredns--674b8bbfcf--6f6jh-" Jan 13 23:47:32.724524 containerd[1984]: 2026-01-13 23:47:32.162 [INFO][5288] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5" Namespace="kube-system" Pod="coredns-674b8bbfcf-6f6jh" WorkloadEndpoint="ip--172--31--28--147-k8s-coredns--674b8bbfcf--6f6jh-eth0" Jan 13 23:47:32.724524 containerd[1984]: 2026-01-13 23:47:32.279 [INFO][5319] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5" HandleID="k8s-pod-network.c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5" Workload="ip--172--31--28--147-k8s-coredns--674b8bbfcf--6f6jh-eth0" Jan 13 23:47:32.724524 containerd[1984]: 2026-01-13 23:47:32.283 [INFO][5319] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5" HandleID="k8s-pod-network.c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5" Workload="ip--172--31--28--147-k8s-coredns--674b8bbfcf--6f6jh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x4000239950), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-147", "pod":"coredns-674b8bbfcf-6f6jh", "timestamp":"2026-01-13 23:47:32.279923354 +0000 UTC"}, Hostname:"ip-172-31-28-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:47:32.724524 containerd[1984]: 2026-01-13 23:47:32.283 [INFO][5319] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:47:32.724524 containerd[1984]: 2026-01-13 23:47:32.472 [INFO][5319] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 13 23:47:32.724524 containerd[1984]: 2026-01-13 23:47:32.472 [INFO][5319] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-147' Jan 13 23:47:32.724524 containerd[1984]: 2026-01-13 23:47:32.527 [INFO][5319] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5" host="ip-172-31-28-147" Jan 13 23:47:32.724524 containerd[1984]: 2026-01-13 23:47:32.544 [INFO][5319] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-147" Jan 13 23:47:32.724524 containerd[1984]: 2026-01-13 23:47:32.564 [INFO][5319] ipam/ipam.go 511: Trying affinity for 192.168.14.192/26 host="ip-172-31-28-147" Jan 13 23:47:32.724524 containerd[1984]: 2026-01-13 23:47:32.569 [INFO][5319] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.192/26 host="ip-172-31-28-147" Jan 13 23:47:32.724524 containerd[1984]: 2026-01-13 23:47:32.575 [INFO][5319] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ip-172-31-28-147" Jan 13 23:47:32.724524 containerd[1984]: 2026-01-13 23:47:32.576 [INFO][5319] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5" host="ip-172-31-28-147" Jan 13 23:47:32.724524 containerd[1984]: 2026-01-13 23:47:32.579 [INFO][5319] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5 Jan 13 23:47:32.724524 containerd[1984]: 2026-01-13 23:47:32.594 [INFO][5319] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5" host="ip-172-31-28-147" Jan 13 23:47:32.724524 containerd[1984]: 2026-01-13 23:47:32.621 [INFO][5319] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.14.200/26] block=192.168.14.192/26 handle="k8s-pod-network.c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5" host="ip-172-31-28-147" Jan 13 23:47:32.724524 containerd[1984]: 2026-01-13 23:47:32.621 [INFO][5319] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.200/26] handle="k8s-pod-network.c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5" host="ip-172-31-28-147" Jan 13 23:47:32.724524 containerd[1984]: 2026-01-13 23:47:32.621 [INFO][5319] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 13 23:47:32.724524 containerd[1984]: 2026-01-13 23:47:32.621 [INFO][5319] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.14.200/26] IPv6=[] ContainerID="c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5" HandleID="k8s-pod-network.c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5" Workload="ip--172--31--28--147-k8s-coredns--674b8bbfcf--6f6jh-eth0" Jan 13 23:47:32.726358 containerd[1984]: 2026-01-13 23:47:32.628 [INFO][5288] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5" Namespace="kube-system" Pod="coredns-674b8bbfcf-6f6jh" WorkloadEndpoint="ip--172--31--28--147-k8s-coredns--674b8bbfcf--6f6jh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--147-k8s-coredns--674b8bbfcf--6f6jh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fff694d2-51f0-4513-a81f-bedcae1438b9", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 46, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-147", ContainerID:"", Pod:"coredns-674b8bbfcf-6f6jh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali98a1dab838d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:47:32.726358 containerd[1984]: 2026-01-13 23:47:32.629 [INFO][5288] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.200/32] ContainerID="c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5" Namespace="kube-system" Pod="coredns-674b8bbfcf-6f6jh" WorkloadEndpoint="ip--172--31--28--147-k8s-coredns--674b8bbfcf--6f6jh-eth0" Jan 13 23:47:32.726358 containerd[1984]: 2026-01-13 23:47:32.634 [INFO][5288] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali98a1dab838d ContainerID="c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5" Namespace="kube-system" Pod="coredns-674b8bbfcf-6f6jh" WorkloadEndpoint="ip--172--31--28--147-k8s-coredns--674b8bbfcf--6f6jh-eth0" Jan 13 23:47:32.726358 containerd[1984]: 2026-01-13 23:47:32.657 [INFO][5288] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5" Namespace="kube-system" Pod="coredns-674b8bbfcf-6f6jh" 
WorkloadEndpoint="ip--172--31--28--147-k8s-coredns--674b8bbfcf--6f6jh-eth0" Jan 13 23:47:32.726358 containerd[1984]: 2026-01-13 23:47:32.662 [INFO][5288] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5" Namespace="kube-system" Pod="coredns-674b8bbfcf-6f6jh" WorkloadEndpoint="ip--172--31--28--147-k8s-coredns--674b8bbfcf--6f6jh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--147-k8s-coredns--674b8bbfcf--6f6jh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fff694d2-51f0-4513-a81f-bedcae1438b9", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 46, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-147", ContainerID:"c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5", Pod:"coredns-674b8bbfcf-6f6jh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali98a1dab838d", MAC:"da:3b:eb:20:d3:ae", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:47:32.726358 containerd[1984]: 2026-01-13 23:47:32.712 [INFO][5288] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5" Namespace="kube-system" Pod="coredns-674b8bbfcf-6f6jh" WorkloadEndpoint="ip--172--31--28--147-k8s-coredns--674b8bbfcf--6f6jh-eth0" Jan 13 23:47:32.784316 systemd[1]: Started cri-containerd-b8a645c0425dafb181966a0db9d6a082dfe42e9308f7605ae3f2f2521b66d6a3.scope - libcontainer container b8a645c0425dafb181966a0db9d6a082dfe42e9308f7605ae3f2f2521b66d6a3. 
Jan 13 23:47:32.832054 containerd[1984]: time="2026-01-13T23:47:32.831954869Z" level=info msg="connecting to shim c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5" address="unix:///run/containerd/s/00dbda605f443a2b7fbf5629bb8af26c11509f8619688575f1bf53db21a163af" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:47:32.842000 audit[5403]: NETFILTER_CFG table=filter:138 family=2 entries=52 op=nft_register_chain pid=5403 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:47:32.842000 audit[5403]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24312 a0=3 a1=ffffe8d5e1a0 a2=0 a3=ffffb4f7efa8 items=0 ppid=4654 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:32.842000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:47:32.929984 systemd[1]: Started cri-containerd-c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5.scope - libcontainer container c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5. Jan 13 23:47:32.954000 audit: BPF prog-id=248 op=LOAD Jan 13 23:47:32.958000 audit: BPF prog-id=249 op=LOAD Jan 13 23:47:32.958000 audit[5372]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5361 pid=5372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:32.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238613634356330343235646166623138313936366130646239643661 Jan 13 23:47:32.959000 audit: BPF prog-id=249 op=UNLOAD Jan 13 23:47:32.959000 audit[5372]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5361 pid=5372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:32.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238613634356330343235646166623138313936366130646239643661 Jan 13 23:47:32.961000 audit: BPF prog-id=250 op=LOAD Jan 13 23:47:32.961000 audit[5372]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5361 pid=5372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:32.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238613634356330343235646166623138313936366130646239643661 Jan 13 23:47:32.961000 audit: BPF prog-id=251 op=LOAD Jan 13 23:47:32.961000 audit[5372]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 
ppid=5361 pid=5372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:32.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238613634356330343235646166623138313936366130646239643661 Jan 13 23:47:32.961000 audit: BPF prog-id=251 op=UNLOAD Jan 13 23:47:32.961000 audit[5372]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5361 pid=5372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:32.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238613634356330343235646166623138313936366130646239643661 Jan 13 23:47:32.961000 audit: BPF prog-id=250 op=UNLOAD Jan 13 23:47:32.961000 audit[5372]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5361 pid=5372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:32.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238613634356330343235646166623138313936366130646239643661 Jan 13 23:47:32.961000 audit: BPF prog-id=252 op=LOAD Jan 13 23:47:32.961000 audit[5372]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5361 pid=5372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:32.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238613634356330343235646166623138313936366130646239643661 Jan 13 23:47:32.984000 audit: BPF prog-id=253 op=LOAD Jan 13 23:47:32.985000 audit: BPF prog-id=254 op=LOAD Jan 13 23:47:32.985000 audit[5417]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=5399 pid=5417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:32.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335336536336231613132653731666637363939653037626665316634 Jan 13 23:47:32.986000 audit: BPF prog-id=254 op=UNLOAD Jan 13 23:47:32.986000 audit[5417]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5399 pid=5417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:32.986000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335336536336231613132653731666637363939653037626665316634 Jan 13 23:47:32.987000 audit: BPF prog-id=255 op=LOAD Jan 13 23:47:32.987000 audit[5417]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=5399 pid=5417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:32.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335336536336231613132653731666637363939653037626665316634 Jan 13 23:47:32.988000 audit: BPF prog-id=256 op=LOAD Jan 13 23:47:32.988000 audit[5417]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=5399 pid=5417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:32.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335336536336231613132653731666637363939653037626665316634 Jan 13 23:47:32.989000 audit: BPF prog-id=256 op=UNLOAD Jan 13 23:47:32.989000 audit[5417]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5399 pid=5417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:32.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335336536336231613132653731666637363939653037626665316634 Jan 13 23:47:32.990000 audit: BPF prog-id=255 op=UNLOAD Jan 13 23:47:32.990000 audit[5417]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5399 pid=5417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:32.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335336536336231613132653731666637363939653037626665316634 Jan 13 23:47:32.991000 audit: BPF prog-id=257 op=LOAD Jan 13 23:47:32.991000 audit[5417]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=5399 pid=5417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:32.991000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335336536336231613132653731666637363939653037626665316634 Jan 13 23:47:32.992000 audit[5432]: NETFILTER_CFG table=filter:139 family=2 entries=52 op=nft_register_chain pid=5432 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:47:32.992000 audit[5432]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23892 a0=3 a1=ffffcca77f10 a2=0 a3=ffff885f0fa8 items=0 ppid=4654 pid=5432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:32.992000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:47:33.085428 containerd[1984]: time="2026-01-13T23:47:33.085348298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6f6jh,Uid:fff694d2-51f0-4513-a81f-bedcae1438b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5\"" Jan 13 23:47:33.100048 containerd[1984]: time="2026-01-13T23:47:33.099973202Z" level=info msg="CreateContainer within sandbox \"c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 23:47:33.138532 containerd[1984]: time="2026-01-13T23:47:33.137833875Z" level=info msg="Container 3e298a6ef28218a380f75e5ea4319b81ff102b41644d5ee7e00bf51bde109212: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:47:33.178096 containerd[1984]: time="2026-01-13T23:47:33.177963459Z" level=info msg="CreateContainer within sandbox \"c53e63b1a12e71ff7699e07bfe1f497e705d2ee7716f4ba5c1851e2430e2beb5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3e298a6ef28218a380f75e5ea4319b81ff102b41644d5ee7e00bf51bde109212\"" Jan 13 23:47:33.183736 containerd[1984]: time="2026-01-13T23:47:33.182954019Z" level=info msg="StartContainer for \"3e298a6ef28218a380f75e5ea4319b81ff102b41644d5ee7e00bf51bde109212\"" Jan 13 23:47:33.197465 containerd[1984]: time="2026-01-13T23:47:33.197201355Z" level=info msg="connecting to shim 3e298a6ef28218a380f75e5ea4319b81ff102b41644d5ee7e00bf51bde109212" address="unix:///run/containerd/s/00dbda605f443a2b7fbf5629bb8af26c11509f8619688575f1bf53db21a163af" protocol=ttrpc version=3 Jan 13 23:47:33.200146 containerd[1984]: time="2026-01-13T23:47:33.200055579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d44f7875-phhfv,Uid:12f7eaa9-9d20-4e8f-9f20-d2118b28d17a,Namespace:calico-system,Attempt:0,} returns sandbox id \"b8a645c0425dafb181966a0db9d6a082dfe42e9308f7605ae3f2f2521b66d6a3\"" Jan 13 23:47:33.205630 containerd[1984]: time="2026-01-13T23:47:33.205400463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 13 23:47:33.260169 systemd-networkd[1574]: cali154077af9cd: Gained IPv6LL Jan 13 23:47:33.269918 systemd[1]: Started cri-containerd-3e298a6ef28218a380f75e5ea4319b81ff102b41644d5ee7e00bf51bde109212.scope - libcontainer container 3e298a6ef28218a380f75e5ea4319b81ff102b41644d5ee7e00bf51bde109212. 
Jan 13 23:47:33.306000 audit: BPF prog-id=258 op=LOAD Jan 13 23:47:33.307000 audit: BPF prog-id=259 op=LOAD Jan 13 23:47:33.307000 audit[5455]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5399 pid=5455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:33.307000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365323938613665663238323138613338306637356535656134333139 Jan 13 23:47:33.308000 audit: BPF prog-id=259 op=UNLOAD Jan 13 23:47:33.308000 audit[5455]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5399 pid=5455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:33.308000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365323938613665663238323138613338306637356535656134333139 Jan 13 23:47:33.309000 audit: BPF prog-id=260 op=LOAD Jan 13 23:47:33.309000 audit[5455]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5399 pid=5455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:33.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365323938613665663238323138613338306637356535656134333139 Jan 13 23:47:33.309000 audit: BPF prog-id=261 op=LOAD Jan 13 23:47:33.309000 audit[5455]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5399 pid=5455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:33.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365323938613665663238323138613338306637356535656134333139 Jan 13 23:47:33.309000 audit: BPF prog-id=261 op=UNLOAD Jan 13 23:47:33.309000 audit[5455]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5399 pid=5455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:33.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365323938613665663238323138613338306637356535656134333139 Jan 13 23:47:33.309000 audit: BPF prog-id=260 op=UNLOAD Jan 13 23:47:33.309000 audit[5455]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5399 pid=5455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:33.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365323938613665663238323138613338306637356535656134333139 Jan 13 23:47:33.309000 audit: BPF prog-id=262 op=LOAD Jan 13 23:47:33.309000 audit[5455]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5399 pid=5455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:33.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365323938613665663238323138613338306637356535656134333139 Jan 13 23:47:33.341124 kubelet[3522]: E0113 23:47:33.338393 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d495df4b7-p456r" podUID="38c0b046-9ddd-4e0f-99d2-de8c0748710c" Jan 13 23:47:33.382643 containerd[1984]: time="2026-01-13T23:47:33.382493692Z" level=info msg="StartContainer for \"3e298a6ef28218a380f75e5ea4319b81ff102b41644d5ee7e00bf51bde109212\" returns successfully" Jan 13 23:47:33.521470 containerd[1984]: time="2026-01-13T23:47:33.521281984Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:47:33.523720 containerd[1984]: time="2026-01-13T23:47:33.523641580Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 13 23:47:33.523851 containerd[1984]: time="2026-01-13T23:47:33.523788796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 13 23:47:33.524198 kubelet[3522]: E0113 23:47:33.524144 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:47:33.524369 kubelet[3522]: E0113 23:47:33.524338 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 
23:47:33.525405 kubelet[3522]: E0113 23:47:33.525281 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w26fn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86d44f7875-phhfv_calico-system(12f7eaa9-9d20-4e8f-9f20-d2118b28d17a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 13 23:47:33.526689 kubelet[3522]: E0113 23:47:33.526618 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d44f7875-phhfv" podUID="12f7eaa9-9d20-4e8f-9f20-d2118b28d17a" Jan 13 23:47:33.768000 audit[5488]: NETFILTER_CFG table=filter:140 family=2 entries=17 
op=nft_register_rule pid=5488 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:33.768000 audit[5488]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffff318220 a2=0 a3=1 items=0 ppid=3635 pid=5488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:33.768000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:33.778000 audit[5488]: NETFILTER_CFG table=nat:141 family=2 entries=35 op=nft_register_chain pid=5488 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:33.778000 audit[5488]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffff318220 a2=0 a3=1 items=0 ppid=3635 pid=5488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:33.778000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:33.898545 systemd-networkd[1574]: calif72c4320a12: Gained IPv6LL Jan 13 23:47:34.362546 kubelet[3522]: E0113 23:47:34.361773 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d495df4b7-p456r" podUID="38c0b046-9ddd-4e0f-99d2-de8c0748710c" Jan 13 23:47:34.362546 kubelet[3522]: E0113 23:47:34.362457 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d44f7875-phhfv" podUID="12f7eaa9-9d20-4e8f-9f20-d2118b28d17a" Jan 13 23:47:34.430927 kubelet[3522]: I0113 23:47:34.430473 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6f6jh" podStartSLOduration=57.430425305 podStartE2EDuration="57.430425305s" podCreationTimestamp="2026-01-13 23:46:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:47:34.428290361 +0000 UTC m=+60.835999383" watchObservedRunningTime="2026-01-13 23:47:34.430425305 +0000 UTC m=+60.838134315" Jan 13 23:47:34.489000 audit[5494]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=5494 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:34.489000 audit[5494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdb1e7110 a2=0 a3=1 items=0 ppid=3635 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:34.489000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:34.540000 audit[5494]: NETFILTER_CFG table=nat:143 family=2 entries=56 op=nft_register_chain pid=5494 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:47:34.540000 audit[5494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffdb1e7110 a2=0 a3=1 items=0 ppid=3635 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:34.540000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:47:34.664859 systemd-networkd[1574]: cali98a1dab838d: Gained IPv6LL Jan 13 23:47:37.227282 ntpd[1939]: Listen normally on 6 vxlan.calico 192.168.14.192:123 Jan 13 23:47:37.227392 ntpd[1939]: Listen normally on 7 caliefd94b02907 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 13 23:47:37.227927 ntpd[1939]: 13 Jan 23:47:37 ntpd[1939]: Listen normally on 6 vxlan.calico 192.168.14.192:123 Jan 13 23:47:37.227927 ntpd[1939]: 13 Jan 23:47:37 ntpd[1939]: Listen normally on 7 caliefd94b02907 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 13 23:47:37.227927 ntpd[1939]: 13 Jan 23:47:37 ntpd[1939]: Listen normally on 8 cali22dd70a2d22 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 13 23:47:37.227927 ntpd[1939]: 13 Jan 23:47:37 ntpd[1939]: Listen normally on 9 cali59e8bd6c8e1 [fe80::ecee:eeff:feee:eeee%6]:123 Jan 13 23:47:37.227927 ntpd[1939]: 13 Jan 23:47:37 ntpd[1939]: Listen normally on 10 cali2de7a7e270c [fe80::ecee:eeff:feee:eeee%7]:123 Jan 13 23:47:37.227927 ntpd[1939]: 13 Jan 23:47:37 ntpd[1939]: Listen normally on 11 vxlan.calico [fe80::64cd:6fff:fe34:f0a9%8]:123 Jan 13 23:47:37.227927 ntpd[1939]: 13 Jan 23:47:37 ntpd[1939]: Listen normally on 12 cali40577fdd299 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 13 23:47:37.227927 ntpd[1939]: 13 Jan 23:47:37 ntpd[1939]: Listen normally on 13 cali154077af9cd [fe80::ecee:eeff:feee:eeee%12]:123 Jan 13 23:47:37.227927 ntpd[1939]: 13 Jan 23:47:37 ntpd[1939]: Listen normally on 14 calif72c4320a12 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 13 23:47:37.227444 ntpd[1939]: Listen normally on 8 cali22dd70a2d22 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 13 23:47:37.228841 ntpd[1939]: 13 Jan 23:47:37 ntpd[1939]: Listen normally on 15 cali98a1dab838d [fe80::ecee:eeff:feee:eeee%14]:123 Jan 13 23:47:37.227492 ntpd[1939]: Listen normally on 9 cali59e8bd6c8e1 [fe80::ecee:eeff:feee:eeee%6]:123 Jan 13 23:47:37.227635 ntpd[1939]: Listen normally on 10 cali2de7a7e270c [fe80::ecee:eeff:feee:eeee%7]:123 Jan 13 23:47:37.227687 ntpd[1939]: Listen normally on 11 vxlan.calico [fe80::64cd:6fff:fe34:f0a9%8]:123 Jan 13 23:47:37.227733 ntpd[1939]: Listen normally on 12 cali40577fdd299 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 13 23:47:37.227785 ntpd[1939]: Listen normally on 13 cali154077af9cd [fe80::ecee:eeff:feee:eeee%12]:123 Jan 13 23:47:37.227870 ntpd[1939]: Listen normally on 14 calif72c4320a12 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 13 23:47:37.227949 ntpd[1939]: Listen normally on 15 cali98a1dab838d [fe80::ecee:eeff:feee:eeee%14]:123 Jan 13 23:47:40.978951 systemd[1]: Started sshd@7-172.31.28.147:22-68.220.241.50:41060.service - OpenSSH per-connection server daemon (68.220.241.50:41060). 
Jan 13 23:47:40.982535 kernel: kauditd_printk_skb: 319 callbacks suppressed Jan 13 23:47:40.982675 kernel: audit: type=1130 audit(1768348060.978:764): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.28.147:22-68.220.241.50:41060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:40.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.28.147:22-68.220.241.50:41060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.508000 audit[5510]: USER_ACCT pid=5510 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:41.510285 sshd[5510]: Accepted publickey for core from 68.220.241.50 port 41060 ssh2: RSA SHA256:vOY8WGypEHDe1ucQj1E1thVL6OXIE3/83o6052QrcUg Jan 13 23:47:41.517544 kernel: audit: type=1101 audit(1768348061.508:765): pid=5510 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:41.516000 audit[5510]: CRED_ACQ pid=5510 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:41.520324 sshd-session[5510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:47:41.528833 kernel: audit: type=1103 audit(1768348061.516:766): pid=5510 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:41.528982 kernel: audit: type=1006 audit(1768348061.517:767): pid=5510 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 13 23:47:41.517000 audit[5510]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffcf82fe0 a2=3 a3=0 items=0 ppid=1 pid=5510 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:41.535431 kernel: audit: type=1300 audit(1768348061.517:767): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffcf82fe0 a2=3 a3=0 items=0 ppid=1 pid=5510 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:41.517000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:47:41.538757 kernel: audit: type=1327 audit(1768348061.517:767): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:47:41.544869 systemd-logind[1945]: New session 9 of user core. Jan 13 23:47:41.553812 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 13 23:47:41.560000 audit[5510]: USER_START pid=5510 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:41.564000 audit[5514]: CRED_ACQ pid=5514 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:41.574637 kernel: audit: type=1105 audit(1768348061.560:768): pid=5510 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:41.574713 kernel: audit: type=1103 audit(1768348061.564:769): pid=5514 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:41.977221 sshd[5514]: Connection closed by 68.220.241.50 port 41060 Jan 13 23:47:41.978090 sshd-session[5510]: pam_unix(sshd:session): session closed for user core Jan 13 23:47:41.981000 audit[5510]: USER_END pid=5510 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:41.989166 systemd[1]: sshd@7-172.31.28.147:22-68.220.241.50:41060.service: Deactivated successfully. Jan 13 23:47:41.991995 kernel: audit: type=1106 audit(1768348061.981:770): pid=5510 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:41.992133 kernel: audit: type=1104 audit(1768348061.982:771): pid=5510 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:41.982000 audit[5510]: CRED_DISP pid=5510 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:41.995621 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 23:47:41.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.28.147:22-68.220.241.50:41060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.001240 systemd-logind[1945]: Session 9 logged out. Waiting for processes to exit. Jan 13 23:47:42.006035 systemd-logind[1945]: Removed session 9. 
Jan 13 23:47:42.897825 containerd[1984]: time="2026-01-13T23:47:42.897540867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 13 23:47:43.164068 containerd[1984]: time="2026-01-13T23:47:43.163911000Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:47:43.166371 containerd[1984]: time="2026-01-13T23:47:43.166294044Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 13 23:47:43.166538 containerd[1984]: time="2026-01-13T23:47:43.166419720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 13 23:47:43.166702 kubelet[3522]: E0113 23:47:43.166653 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:47:43.169108 kubelet[3522]: E0113 23:47:43.166711 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:47:43.169108 kubelet[3522]: E0113 23:47:43.167036 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d29b182626494232aa116abb84d9adc5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-62ft2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74c798dd46-h9tcx_calico-system(d4afc04a-ba35-4c2d-9726-5e2240fe2e11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 13 23:47:43.170711 containerd[1984]: time="2026-01-13T23:47:43.167479356Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:47:43.452294 containerd[1984]: time="2026-01-13T23:47:43.452006474Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:47:43.454468 containerd[1984]: time="2026-01-13T23:47:43.454288982Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:47:43.454468 containerd[1984]: time="2026-01-13T23:47:43.454405622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:47:43.454848 kubelet[3522]: E0113 23:47:43.454737 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:47:43.454848 kubelet[3522]: E0113 23:47:43.454808 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:47:43.455257 containerd[1984]: time="2026-01-13T23:47:43.455183654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 13 23:47:43.455808 kubelet[3522]: E0113 23:47:43.455628 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2ptt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d495df4b7-2bldm_calico-apiserver(db4e6234-4e3a-4385-ad17-4564cf1a27b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:47:43.457808 kubelet[3522]: E0113 23:47:43.457731 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d495df4b7-2bldm" podUID="db4e6234-4e3a-4385-ad17-4564cf1a27b8" Jan 13 23:47:43.758763 containerd[1984]: time="2026-01-13T23:47:43.758597355Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:47:43.760992 containerd[1984]: time="2026-01-13T23:47:43.760921359Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 13 23:47:43.761150 containerd[1984]: time="2026-01-13T23:47:43.761048811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 13 23:47:43.761436 kubelet[3522]: E0113 23:47:43.761380 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:47:43.761548 kubelet[3522]: E0113 23:47:43.761449 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:47:43.761777 kubelet[3522]: E0113 23:47:43.761642 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62ft2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74c798dd46-h9tcx_calico-system(d4afc04a-ba35-4c2d-9726-5e2240fe2e11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 13 23:47:43.763534 kubelet[3522]: E0113 23:47:43.763434 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74c798dd46-h9tcx" podUID="d4afc04a-ba35-4c2d-9726-5e2240fe2e11" Jan 13 23:47:44.899693 containerd[1984]: time="2026-01-13T23:47:44.899264249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 13 23:47:45.151630 containerd[1984]: time="2026-01-13T23:47:45.151371938Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:47:45.154287 containerd[1984]: time="2026-01-13T23:47:45.154160606Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 13 23:47:45.154287 
containerd[1984]: time="2026-01-13T23:47:45.154253258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 13 23:47:45.154702 kubelet[3522]: E0113 23:47:45.154461 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:47:45.154702 kubelet[3522]: E0113 23:47:45.154543 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:47:45.155242 kubelet[3522]: E0113 23:47:45.154805 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbgsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t67lh_calico-system(62188975-46ce-424e-8434-9b05ea3b2915): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 13 23:47:45.156309 containerd[1984]: time="2026-01-13T23:47:45.156229202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 13 23:47:45.411582 containerd[1984]: time="2026-01-13T23:47:45.411400695Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:47:45.413757 containerd[1984]: time="2026-01-13T23:47:45.413684704Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 13 23:47:45.413876 containerd[1984]: time="2026-01-13T23:47:45.413805004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 13 23:47:45.414119 kubelet[3522]: E0113 23:47:45.414045 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:47:45.414119 kubelet[3522]: E0113 23:47:45.414115 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:47:45.414640 kubelet[3522]: E0113 23:47:45.414474 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p5g5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-6fffm_calico-system(f27ddc53-e7e4-41b7-97d9-616c5339cc85): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 13 23:47:45.415019 containerd[1984]: time="2026-01-13T23:47:45.414827404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 13 23:47:45.416033 kubelet[3522]: E0113 23:47:45.415935 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6fffm" podUID="f27ddc53-e7e4-41b7-97d9-616c5339cc85" Jan 13 23:47:45.709479 containerd[1984]: time="2026-01-13T23:47:45.709173809Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:47:45.712376 containerd[1984]: time="2026-01-13T23:47:45.711601421Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 13 23:47:45.712604 containerd[1984]: time="2026-01-13T23:47:45.712289549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 13 23:47:45.713080 kubelet[3522]: E0113 23:47:45.712986 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:47:45.713252 kubelet[3522]: E0113 23:47:45.713185 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:47:45.713665 kubelet[3522]: E0113 23:47:45.713497 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbgsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t67lh_calico-system(62188975-46ce-424e-8434-9b05ea3b2915): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 13 23:47:45.715116 kubelet[3522]: E0113 23:47:45.715030 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t67lh" podUID="62188975-46ce-424e-8434-9b05ea3b2915" Jan 13 23:47:47.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.28.147:22-68.220.241.50:49952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:47.074197 systemd[1]: Started sshd@8-172.31.28.147:22-68.220.241.50:49952.service - OpenSSH per-connection server daemon (68.220.241.50:49952). 
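Every pull above ends in the same 404 from ghcr.io. As a cross-check from outside the node, the registry can be asked directly whether the tag exists; the sketch below assumes GHCR's usual anonymous token flow for public repositories (an assumption about the registry, not something taken from this log):

import json
import urllib.error
import urllib.request

# Ask ghcr.io whether a tag exists, mirroring the NotFound errors containerd
# reports above. The anonymous /token request is assumed to work because the
# flatcar/calico/* repositories are public.
def tag_exists(repo, tag):
    tok_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"
    with urllib.request.urlopen(tok_url) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
    )
    try:
        with urllib.request.urlopen(req):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

print(tag_exists("flatcar/calico/whisker", "v3.30.4"))

A False result points at the tag simply not being published under that name, rather than an authentication or network problem on the node.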
Jan 13 23:47:47.077810 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:47:47.077871 kernel: audit: type=1130 audit(1768348067.072:773): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.28.147:22-68.220.241.50:49952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:47.548000 audit[5541]: USER_ACCT pid=5541 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:47.551396 sshd[5541]: Accepted publickey for core from 68.220.241.50 port 49952 ssh2: RSA SHA256:vOY8WGypEHDe1ucQj1E1thVL6OXIE3/83o6052QrcUg Jan 13 23:47:47.557000 audit[5541]: CRED_ACQ pid=5541 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:47.562156 sshd-session[5541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:47:47.564955 kernel: audit: type=1101 audit(1768348067.548:774): pid=5541 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:47.565048 kernel: audit: type=1103 audit(1768348067.557:775): pid=5541 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:47.568649 kernel: audit: type=1006 audit(1768348067.557:776): pid=5541 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 13 23:47:47.557000 audit[5541]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff69c4eb0 a2=3 a3=0 items=0 ppid=1 pid=5541 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:47.576989 kernel: audit: type=1300 audit(1768348067.557:776): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff69c4eb0 a2=3 a3=0 items=0 ppid=1 pid=5541 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:47.557000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:47:47.579847 kernel: audit: type=1327 audit(1768348067.557:776): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:47:47.585316 systemd-logind[1945]: New session 10 of user core. Jan 13 23:47:47.596893 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 13 23:47:47.601000 audit[5541]: USER_START pid=5541 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:47.609000 audit[5545]: CRED_ACQ pid=5545 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:47.616690 kernel: audit: type=1105 audit(1768348067.601:777): pid=5541 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:47.616814 kernel: audit: type=1103 audit(1768348067.609:778): pid=5545 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:47.900981 containerd[1984]: time="2026-01-13T23:47:47.900610436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 13 23:47:47.917542 sshd[5545]: Connection closed by 68.220.241.50 port 49952 Jan 13 23:47:47.918328 sshd-session[5541]: pam_unix(sshd:session): session closed for user core Jan 13 23:47:47.921000 audit[5541]: USER_END pid=5541 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:47.937735 kernel: audit: type=1106 audit(1768348067.921:779): pid=5541 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:47.937861 kernel: audit: type=1104 audit(1768348067.921:780): pid=5541 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:47.921000 audit[5541]: CRED_DISP pid=5541 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:47.933824 systemd[1]: sshd@8-172.31.28.147:22-68.220.241.50:49952.service: Deactivated successfully. Jan 13 23:47:47.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.28.147:22-68.220.241.50:49952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:47.942953 systemd[1]: session-10.scope: Deactivated successfully. 
Jan 13 23:47:47.946625 systemd-logind[1945]: Session 10 logged out. Waiting for processes to exit. Jan 13 23:47:47.951838 systemd-logind[1945]: Removed session 10. Jan 13 23:47:48.217242 containerd[1984]: time="2026-01-13T23:47:48.217081733Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:47:48.219909 containerd[1984]: time="2026-01-13T23:47:48.219826361Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 13 23:47:48.220042 containerd[1984]: time="2026-01-13T23:47:48.219959525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 13 23:47:48.220298 kubelet[3522]: E0113 23:47:48.220228 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:47:48.220984 kubelet[3522]: E0113 23:47:48.220410 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:47:48.220984 kubelet[3522]: E0113 23:47:48.220735 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w26fn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86d44f7875-phhfv_calico-system(12f7eaa9-9d20-4e8f-9f20-d2118b28d17a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 13 23:47:48.222451 kubelet[3522]: E0113 23:47:48.222387 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d44f7875-phhfv" podUID="12f7eaa9-9d20-4e8f-9f20-d2118b28d17a" Jan 13 23:47:49.900865 containerd[1984]: time="2026-01-13T23:47:49.899067922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:47:50.220443 containerd[1984]: time="2026-01-13T23:47:50.220271851Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:47:50.222859 containerd[1984]: time="2026-01-13T23:47:50.222788551Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:47:50.223000 containerd[1984]: time="2026-01-13T23:47:50.222913483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:47:50.223374 kubelet[3522]: E0113 23:47:50.223316 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:47:50.223922 kubelet[3522]: E0113 23:47:50.223385 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:47:50.224174 kubelet[3522]: E0113 23:47:50.223956 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9hmj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d495df4b7-p456r_calico-apiserver(38c0b046-9ddd-4e0f-99d2-de8c0748710c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:47:50.225767 kubelet[3522]: E0113 23:47:50.225578 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d495df4b7-p456r" podUID="38c0b046-9ddd-4e0f-99d2-de8c0748710c" Jan 13 23:47:53.011704 systemd[1]: Started sshd@9-172.31.28.147:22-68.220.241.50:57058.service - OpenSSH per-connection server daemon (68.220.241.50:57058). Jan 13 23:47:53.020722 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:47:53.020867 kernel: audit: type=1130 audit(1768348073.010:782): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.28.147:22-68.220.241.50:57058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:53.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.28.147:22-68.220.241.50:57058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:47:53.470000 audit[5564]: USER_ACCT pid=5564 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:53.472717 sshd[5564]: Accepted publickey for core from 68.220.241.50 port 57058 ssh2: RSA SHA256:vOY8WGypEHDe1ucQj1E1thVL6OXIE3/83o6052QrcUg Jan 13 23:47:53.478827 kernel: audit: type=1101 audit(1768348073.470:783): pid=5564 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:53.477000 audit[5564]: CRED_ACQ pid=5564 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:53.481818 sshd-session[5564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:47:53.488561 kernel: audit: type=1103 audit(1768348073.477:784): pid=5564 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:53.488688 kernel: audit: type=1006 audit(1768348073.478:785): pid=5564 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 13 23:47:53.478000 audit[5564]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe0d72530 a2=3 a3=0 items=0 ppid=1 pid=5564 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:53.495615 kernel: audit: type=1300 audit(1768348073.478:785): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe0d72530 a2=3 a3=0 items=0 ppid=1 pid=5564 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:53.478000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:47:53.501479 kernel: audit: type=1327 audit(1768348073.478:785): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:47:53.505812 systemd-logind[1945]: New session 11 of user core. Jan 13 23:47:53.513820 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 13 23:47:53.519000 audit[5564]: USER_START pid=5564 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:53.528584 kernel: audit: type=1105 audit(1768348073.519:786): pid=5564 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:53.527000 audit[5570]: CRED_ACQ pid=5570 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:53.535589 kernel: audit: type=1103 audit(1768348073.527:787): pid=5570 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:53.847647 sshd[5570]: Connection closed by 68.220.241.50 port 57058 Jan 13 23:47:53.850428 sshd-session[5564]: pam_unix(sshd:session): session closed for user core Jan 13 23:47:53.851000 audit[5564]: USER_END pid=5564 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:53.861109 systemd[1]: sshd@9-172.31.28.147:22-68.220.241.50:57058.service: Deactivated successfully. Jan 13 23:47:53.862574 kernel: audit: type=1106 audit(1768348073.851:788): pid=5564 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:53.862705 kernel: audit: type=1104 audit(1768348073.852:789): pid=5564 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:53.852000 audit[5564]: CRED_DISP pid=5564 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:53.867267 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 23:47:53.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.28.147:22-68.220.241.50:57058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:53.873980 systemd-logind[1945]: Session 11 logged out. Waiting for processes to exit. Jan 13 23:47:53.879067 systemd-logind[1945]: Removed session 11. 
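By this point every Calico image the kubelet has tried (whisker, whisker-backend, apiserver, csi, goldmane, node-driver-registrar, kube-controllers) has failed with the same NotFound error. To inventory the failing references from an exported copy of this journal, a short sketch like the following works; journal.txt is a placeholder path for something like journalctl -b > journal.txt:

import re

# Collect the distinct image references that containerd/kubelet report as
# unresolvable anywhere in the exported journal.
PATTERN = re.compile(r"failed to resolve image: (\S+?): not found")

def missing_images(path="journal.txt"):
    seen = set()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            seen.update(PATTERN.findall(line))
    return sorted(seen)

if __name__ == "__main__":
    for ref in missing_images():
        print(ref)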
Jan 13 23:47:53.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.28.147:22-68.220.241.50:57066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:53.943986 systemd[1]: Started sshd@10-172.31.28.147:22-68.220.241.50:57066.service - OpenSSH per-connection server daemon (68.220.241.50:57066). Jan 13 23:47:54.457000 audit[5582]: USER_ACCT pid=5582 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:54.460807 sshd[5582]: Accepted publickey for core from 68.220.241.50 port 57066 ssh2: RSA SHA256:vOY8WGypEHDe1ucQj1E1thVL6OXIE3/83o6052QrcUg Jan 13 23:47:54.460000 audit[5582]: CRED_ACQ pid=5582 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:54.461000 audit[5582]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff27c2df0 a2=3 a3=0 items=0 ppid=1 pid=5582 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:54.461000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:47:54.464081 sshd-session[5582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:47:54.472483 systemd-logind[1945]: New session 12 of user core. Jan 13 23:47:54.482813 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 23:47:54.487000 audit[5582]: USER_START pid=5582 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:54.491000 audit[5586]: CRED_ACQ pid=5586 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:54.914152 sshd[5586]: Connection closed by 68.220.241.50 port 57066 Jan 13 23:47:54.915216 sshd-session[5582]: pam_unix(sshd:session): session closed for user core Jan 13 23:47:54.918000 audit[5582]: USER_END pid=5582 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:54.918000 audit[5582]: CRED_DISP pid=5582 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:54.925464 systemd[1]: sshd@10-172.31.28.147:22-68.220.241.50:57066.service: Deactivated successfully. 
Jan 13 23:47:54.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.28.147:22-68.220.241.50:57066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:54.930571 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 23:47:54.933465 systemd-logind[1945]: Session 12 logged out. Waiting for processes to exit. Jan 13 23:47:54.937653 systemd-logind[1945]: Removed session 12. Jan 13 23:47:55.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.28.147:22-68.220.241.50:57082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:55.017014 systemd[1]: Started sshd@11-172.31.28.147:22-68.220.241.50:57082.service - OpenSSH per-connection server daemon (68.220.241.50:57082). Jan 13 23:47:55.505000 audit[5596]: USER_ACCT pid=5596 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:55.508532 sshd[5596]: Accepted publickey for core from 68.220.241.50 port 57082 ssh2: RSA SHA256:vOY8WGypEHDe1ucQj1E1thVL6OXIE3/83o6052QrcUg Jan 13 23:47:55.508000 audit[5596]: CRED_ACQ pid=5596 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:55.508000 audit[5596]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc73dc3d0 a2=3 a3=0 items=0 ppid=1 pid=5596 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:55.508000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:47:55.512155 sshd-session[5596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:47:55.522607 systemd-logind[1945]: New session 13 of user core. Jan 13 23:47:55.532825 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 13 23:47:55.538000 audit[5596]: USER_START pid=5596 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:55.542000 audit[5600]: CRED_ACQ pid=5600 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:55.886796 sshd[5600]: Connection closed by 68.220.241.50 port 57082 Jan 13 23:47:55.888241 sshd-session[5596]: pam_unix(sshd:session): session closed for user core Jan 13 23:47:55.890000 audit[5596]: USER_END pid=5596 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:55.891000 audit[5596]: CRED_DISP pid=5596 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:47:55.901062 systemd[1]: sshd@11-172.31.28.147:22-68.220.241.50:57082.service: Deactivated successfully. Jan 13 23:47:55.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.28.147:22-68.220.241.50:57082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:55.907209 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 23:47:55.910301 systemd-logind[1945]: Session 13 logged out. Waiting for processes to exit. Jan 13 23:47:55.913977 systemd-logind[1945]: Removed session 13. 
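Sessions 9 through 13 above each open and close within roughly a second, always from 68.220.241.50 with the same key fingerprint, which is more consistent with scripted access than with interactive logins. To put numbers on that from an exported journal, the pam_unix open/close lines can be paired by sshd-session PID; journal.txt is again a placeholder path, and the sketch assumes one journal entry per line:

import re
from datetime import datetime

# Pair "session opened"/"session closed" pam_unix lines by sshd-session PID and
# report how long each SSH session stayed open. Journal timestamps carry no
# year, so %b %d %H:%M:%S.%f parsing is only meant for entries from one boot.
LINE_RE = re.compile(
    r"^(\w{3} +\d+ [\d:.]+) sshd-session\[(\d+)\]: "
    r"pam_unix\(sshd:session\): session (opened|closed)")

def session_durations(path="journal.txt"):
    opened, durations = {}, {}
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = LINE_RE.match(line)
            if not m:
                continue
            ts = datetime.strptime(m.group(1), "%b %d %H:%M:%S.%f")
            pid, state = m.group(2), m.group(3)
            if state == "opened":
                opened[pid] = ts
            elif pid in opened:
                durations[pid] = (ts - opened.pop(pid)).total_seconds()
    return durations

if __name__ == "__main__":
    for pid, secs in sorted(session_durations().items()):
        print(f"sshd-session[{pid}]: {secs:.3f}s")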
Jan 13 23:47:56.899145 kubelet[3522]: E0113 23:47:56.898531 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d495df4b7-2bldm" podUID="db4e6234-4e3a-4385-ad17-4564cf1a27b8" Jan 13 23:47:56.900975 kubelet[3522]: E0113 23:47:56.900902 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t67lh" podUID="62188975-46ce-424e-8434-9b05ea3b2915" Jan 13 23:47:57.910543 kubelet[3522]: E0113 23:47:57.908280 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74c798dd46-h9tcx" podUID="d4afc04a-ba35-4c2d-9726-5e2240fe2e11" Jan 13 23:47:59.900465 kubelet[3522]: E0113 23:47:59.899916 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6fffm" podUID="f27ddc53-e7e4-41b7-97d9-616c5339cc85" Jan 13 23:48:00.978453 systemd[1]: Started sshd@12-172.31.28.147:22-68.220.241.50:57096.service - OpenSSH per-connection server daemon (68.220.241.50:57096). Jan 13 23:48:00.986284 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 13 23:48:00.986435 kernel: audit: type=1130 audit(1768348080.977:809): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.28.147:22-68.220.241.50:57096 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 13 23:48:00.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.28.147:22-68.220.241.50:57096 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:01.454000 audit[5642]: USER_ACCT pid=5642 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:01.462603 sshd[5642]: Accepted publickey for core from 68.220.241.50 port 57096 ssh2: RSA SHA256:vOY8WGypEHDe1ucQj1E1thVL6OXIE3/83o6052QrcUg Jan 13 23:48:01.461000 audit[5642]: CRED_ACQ pid=5642 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:01.465229 sshd-session[5642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:48:01.469328 kernel: audit: type=1101 audit(1768348081.454:810): pid=5642 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:01.469437 kernel: audit: type=1103 audit(1768348081.461:811): pid=5642 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:01.474416 kernel: audit: type=1006 audit(1768348081.462:812): pid=5642 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 13 23:48:01.462000 audit[5642]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd51f5ef0 a2=3 a3=0 items=0 ppid=1 pid=5642 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:01.481928 kernel: audit: type=1300 audit(1768348081.462:812): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd51f5ef0 a2=3 a3=0 items=0 ppid=1 pid=5642 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:01.462000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:48:01.485406 kernel: audit: type=1327 audit(1768348081.462:812): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:48:01.486172 systemd-logind[1945]: New session 14 of user core. Jan 13 23:48:01.493798 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 13 23:48:01.500000 audit[5642]: USER_START pid=5642 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:01.504000 audit[5646]: CRED_ACQ pid=5646 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:01.514543 kernel: audit: type=1105 audit(1768348081.500:813): pid=5642 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:01.514701 kernel: audit: type=1103 audit(1768348081.504:814): pid=5646 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:01.835617 sshd[5646]: Connection closed by 68.220.241.50 port 57096 Jan 13 23:48:01.837290 sshd-session[5642]: pam_unix(sshd:session): session closed for user core Jan 13 23:48:01.839000 audit[5642]: USER_END pid=5642 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:01.847691 systemd[1]: sshd@12-172.31.28.147:22-68.220.241.50:57096.service: Deactivated successfully. Jan 13 23:48:01.839000 audit[5642]: CRED_DISP pid=5642 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:01.855968 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 23:48:01.859827 kernel: audit: type=1106 audit(1768348081.839:815): pid=5642 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:01.859958 kernel: audit: type=1104 audit(1768348081.839:816): pid=5642 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:01.860556 systemd-logind[1945]: Session 14 logged out. Waiting for processes to exit. Jan 13 23:48:01.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.28.147:22-68.220.241.50:57096 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:01.865266 systemd-logind[1945]: Removed session 14. 
Jan 13 23:48:03.900054 kubelet[3522]: E0113 23:48:03.899146 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d44f7875-phhfv" podUID="12f7eaa9-9d20-4e8f-9f20-d2118b28d17a" Jan 13 23:48:03.901097 kubelet[3522]: E0113 23:48:03.901007 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d495df4b7-p456r" podUID="38c0b046-9ddd-4e0f-99d2-de8c0748710c" Jan 13 23:48:06.934984 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:48:06.935134 kernel: audit: type=1130 audit(1768348086.931:818): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.28.147:22-68.220.241.50:34164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:06.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.28.147:22-68.220.241.50:34164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:06.932763 systemd[1]: Started sshd@13-172.31.28.147:22-68.220.241.50:34164.service - OpenSSH per-connection server daemon (68.220.241.50:34164). 
Jan 13 23:48:07.417000 audit[5659]: USER_ACCT pid=5659 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:07.425736 sshd[5659]: Accepted publickey for core from 68.220.241.50 port 34164 ssh2: RSA SHA256:vOY8WGypEHDe1ucQj1E1thVL6OXIE3/83o6052QrcUg Jan 13 23:48:07.429123 sshd-session[5659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:48:07.425000 audit[5659]: CRED_ACQ pid=5659 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:07.436776 kernel: audit: type=1101 audit(1768348087.417:819): pid=5659 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:07.436907 kernel: audit: type=1103 audit(1768348087.425:820): pid=5659 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:07.441256 kernel: audit: type=1006 audit(1768348087.426:821): pid=5659 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 13 23:48:07.426000 audit[5659]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff64e3be0 a2=3 a3=0 items=0 ppid=1 pid=5659 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:07.449612 kernel: audit: type=1300 audit(1768348087.426:821): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff64e3be0 a2=3 a3=0 items=0 ppid=1 pid=5659 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:07.426000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:48:07.452290 kernel: audit: type=1327 audit(1768348087.426:821): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:48:07.460639 systemd-logind[1945]: New session 15 of user core. Jan 13 23:48:07.468959 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 13 23:48:07.478000 audit[5659]: USER_START pid=5659 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:07.489000 audit[5663]: CRED_ACQ pid=5663 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:07.497285 kernel: audit: type=1105 audit(1768348087.478:822): pid=5659 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:07.497416 kernel: audit: type=1103 audit(1768348087.489:823): pid=5663 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:07.915683 sshd[5663]: Connection closed by 68.220.241.50 port 34164 Jan 13 23:48:07.916836 sshd-session[5659]: pam_unix(sshd:session): session closed for user core Jan 13 23:48:07.919000 audit[5659]: USER_END pid=5659 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:07.930407 systemd[1]: sshd@13-172.31.28.147:22-68.220.241.50:34164.service: Deactivated successfully. Jan 13 23:48:07.919000 audit[5659]: CRED_DISP pid=5659 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:07.936801 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 23:48:07.941927 kernel: audit: type=1106 audit(1768348087.919:824): pid=5659 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:07.942275 kernel: audit: type=1104 audit(1768348087.919:825): pid=5659 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:07.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.28.147:22-68.220.241.50:34164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:07.943183 systemd-logind[1945]: Session 15 logged out. Waiting for processes to exit. Jan 13 23:48:07.948396 systemd-logind[1945]: Removed session 15. 
Jan 13 23:48:08.900245 containerd[1984]: time="2026-01-13T23:48:08.899902516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 13 23:48:09.162057 containerd[1984]: time="2026-01-13T23:48:09.161913745Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:48:09.164122 containerd[1984]: time="2026-01-13T23:48:09.164057461Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 13 23:48:09.164449 containerd[1984]: time="2026-01-13T23:48:09.164255545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 13 23:48:09.166378 kubelet[3522]: E0113 23:48:09.164804 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:48:09.166378 kubelet[3522]: E0113 23:48:09.164866 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:48:09.166378 kubelet[3522]: E0113 23:48:09.165042 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d29b182626494232aa116abb84d9adc5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-62ft2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74c798dd46-h9tcx_calico-system(d4afc04a-ba35-4c2d-9726-5e2240fe2e11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 13 23:48:09.170090 containerd[1984]: time="2026-01-13T23:48:09.169181678Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 13 23:48:09.427132 containerd[1984]: time="2026-01-13T23:48:09.426627015Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:48:09.429154 containerd[1984]: time="2026-01-13T23:48:09.429019299Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 13 23:48:09.429154 containerd[1984]: time="2026-01-13T23:48:09.429082779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 13 23:48:09.429632 kubelet[3522]: E0113 23:48:09.429568 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:48:09.429748 kubelet[3522]: E0113 23:48:09.429645 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:48:09.429947 kubelet[3522]: E0113 23:48:09.429822 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62ft2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74c798dd46-h9tcx_calico-system(d4afc04a-ba35-4c2d-9726-5e2240fe2e11): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 13 23:48:09.431531 kubelet[3522]: E0113 23:48:09.431323 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74c798dd46-h9tcx" podUID="d4afc04a-ba35-4c2d-9726-5e2240fe2e11" Jan 13 23:48:09.902903 containerd[1984]: time="2026-01-13T23:48:09.901929305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:48:10.184008 containerd[1984]: time="2026-01-13T23:48:10.183847275Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:48:10.185551 containerd[1984]: time="2026-01-13T23:48:10.185462127Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:48:10.185708 containerd[1984]: time="2026-01-13T23:48:10.185597379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:48:10.185985 kubelet[3522]: E0113 23:48:10.185871 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:48:10.185985 kubelet[3522]: E0113 23:48:10.185970 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:48:10.187212 kubelet[3522]: E0113 23:48:10.186278 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2ptt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d495df4b7-2bldm_calico-apiserver(db4e6234-4e3a-4385-ad17-4564cf1a27b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:48:10.187733 kubelet[3522]: E0113 23:48:10.187467 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d495df4b7-2bldm" podUID="db4e6234-4e3a-4385-ad17-4564cf1a27b8" Jan 13 23:48:10.188639 containerd[1984]: time="2026-01-13T23:48:10.187184187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 13 23:48:10.456023 containerd[1984]: time="2026-01-13T23:48:10.455867800Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:48:10.457673 containerd[1984]: time="2026-01-13T23:48:10.457598644Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 13 23:48:10.457799 containerd[1984]: time="2026-01-13T23:48:10.457724524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 13 23:48:10.458417 kubelet[3522]: E0113 23:48:10.458052 3522 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:48:10.458417 kubelet[3522]: E0113 23:48:10.458115 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:48:10.458417 kubelet[3522]: E0113 23:48:10.458323 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbgsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t67lh_calico-system(62188975-46ce-424e-8434-9b05ea3b2915): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 13 23:48:10.462558 containerd[1984]: time="2026-01-13T23:48:10.462382888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 13 23:48:10.719479 containerd[1984]: time="2026-01-13T23:48:10.719294933Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:48:10.721118 containerd[1984]: time="2026-01-13T23:48:10.721047233Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 13 23:48:10.721544 containerd[1984]: time="2026-01-13T23:48:10.721169261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 13 23:48:10.721676 kubelet[3522]: E0113 23:48:10.721478 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:48:10.721879 kubelet[3522]: E0113 23:48:10.721764 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:48:10.722394 kubelet[3522]: E0113 23:48:10.722201 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbgsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t67lh_calico-system(62188975-46ce-424e-8434-9b05ea3b2915): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 13 23:48:10.723835 kubelet[3522]: E0113 23:48:10.723757 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t67lh" podUID="62188975-46ce-424e-8434-9b05ea3b2915" Jan 13 23:48:13.009345 systemd[1]: Started sshd@14-172.31.28.147:22-68.220.241.50:51790.service - OpenSSH per-connection server daemon (68.220.241.50:51790). Jan 13 23:48:13.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.28.147:22-68.220.241.50:51790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:13.012663 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:48:13.012735 kernel: audit: type=1130 audit(1768348093.009:827): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.28.147:22-68.220.241.50:51790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:13.490000 audit[5683]: USER_ACCT pid=5683 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:13.494746 sshd[5683]: Accepted publickey for core from 68.220.241.50 port 51790 ssh2: RSA SHA256:vOY8WGypEHDe1ucQj1E1thVL6OXIE3/83o6052QrcUg Jan 13 23:48:13.500557 kernel: audit: type=1101 audit(1768348093.490:828): pid=5683 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:13.500696 kernel: audit: type=1103 audit(1768348093.497:829): pid=5683 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:13.497000 audit[5683]: CRED_ACQ pid=5683 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:13.501573 sshd-session[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:48:13.509776 kernel: audit: type=1006 audit(1768348093.498:830): pid=5683 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 13 23:48:13.510553 kernel: audit: type=1300 audit(1768348093.498:830): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffce8eb910 a2=3 a3=0 items=0 ppid=1 pid=5683 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:13.498000 audit[5683]: SYSCALL arch=c00000b7 
syscall=64 success=yes exit=3 a0=8 a1=ffffce8eb910 a2=3 a3=0 items=0 ppid=1 pid=5683 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:13.498000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:48:13.520777 kernel: audit: type=1327 audit(1768348093.498:830): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:48:13.523823 systemd-logind[1945]: New session 16 of user core. Jan 13 23:48:13.533832 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 23:48:13.564000 audit[5683]: USER_START pid=5683 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:13.575604 kernel: audit: type=1105 audit(1768348093.564:831): pid=5683 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:13.575723 kernel: audit: type=1103 audit(1768348093.573:832): pid=5689 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:13.573000 audit[5689]: CRED_ACQ pid=5689 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:13.874978 sshd[5689]: Connection closed by 68.220.241.50 port 51790 Jan 13 23:48:13.875927 sshd-session[5683]: pam_unix(sshd:session): session closed for user core Jan 13 23:48:13.879000 audit[5683]: USER_END pid=5683 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:13.885918 systemd[1]: sshd@14-172.31.28.147:22-68.220.241.50:51790.service: Deactivated successfully. Jan 13 23:48:13.879000 audit[5683]: CRED_DISP pid=5683 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:13.891942 systemd[1]: session-16.scope: Deactivated successfully. 
Jan 13 23:48:13.893983 kernel: audit: type=1106 audit(1768348093.879:833): pid=5683 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:13.894103 kernel: audit: type=1104 audit(1768348093.879:834): pid=5683 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:13.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.28.147:22-68.220.241.50:51790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:13.898620 systemd-logind[1945]: Session 16 logged out. Waiting for processes to exit. Jan 13 23:48:13.901342 systemd-logind[1945]: Removed session 16. Jan 13 23:48:14.897477 containerd[1984]: time="2026-01-13T23:48:14.897330274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 13 23:48:15.153802 containerd[1984]: time="2026-01-13T23:48:15.153647263Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:48:15.155466 containerd[1984]: time="2026-01-13T23:48:15.155402419Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 13 23:48:15.155611 containerd[1984]: time="2026-01-13T23:48:15.155551399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 13 23:48:15.155985 kubelet[3522]: E0113 23:48:15.155847 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:48:15.155985 kubelet[3522]: E0113 23:48:15.155910 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:48:15.156941 kubelet[3522]: E0113 23:48:15.156776 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p5g5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-6fffm_calico-system(f27ddc53-e7e4-41b7-97d9-616c5339cc85): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 13 23:48:15.158122 kubelet[3522]: E0113 23:48:15.158046 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6fffm" podUID="f27ddc53-e7e4-41b7-97d9-616c5339cc85" Jan 13 23:48:17.902694 containerd[1984]: time="2026-01-13T23:48:17.902268361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 
13 23:48:18.167381 containerd[1984]: time="2026-01-13T23:48:18.167112718Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:48:18.169563 containerd[1984]: time="2026-01-13T23:48:18.169402270Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:48:18.169563 containerd[1984]: time="2026-01-13T23:48:18.169483150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:48:18.169949 kubelet[3522]: E0113 23:48:18.169903 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:48:18.171349 kubelet[3522]: E0113 23:48:18.170437 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:48:18.171349 kubelet[3522]: E0113 23:48:18.170763 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9hmj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-5d495df4b7-p456r_calico-apiserver(38c0b046-9ddd-4e0f-99d2-de8c0748710c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:48:18.171830 containerd[1984]: time="2026-01-13T23:48:18.171772534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 13 23:48:18.172483 kubelet[3522]: E0113 23:48:18.172424 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d495df4b7-p456r" podUID="38c0b046-9ddd-4e0f-99d2-de8c0748710c" Jan 13 23:48:18.436396 containerd[1984]: time="2026-01-13T23:48:18.436250508Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:48:18.438771 containerd[1984]: time="2026-01-13T23:48:18.438642456Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 13 23:48:18.439041 containerd[1984]: time="2026-01-13T23:48:18.438707460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 13 23:48:18.439439 kubelet[3522]: E0113 23:48:18.439372 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:48:18.439629 kubelet[3522]: E0113 23:48:18.439442 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:48:18.439765 kubelet[3522]: E0113 23:48:18.439674 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w26fn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86d44f7875-phhfv_calico-system(12f7eaa9-9d20-4e8f-9f20-d2118b28d17a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 13 23:48:18.441633 kubelet[3522]: E0113 23:48:18.441560 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d44f7875-phhfv" podUID="12f7eaa9-9d20-4e8f-9f20-d2118b28d17a" Jan 13 23:48:18.969255 systemd[1]: Started sshd@15-172.31.28.147:22-68.220.241.50:51796.service - OpenSSH per-connection server daemon (68.220.241.50:51796). 
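The "fetch failed after status: 404 Not Found" lines above mean containerd reached ghcr.io but could not resolve a manifest for the requested tag, so the pull surfaces to kubelet as ErrImagePull. A minimal sketch for confirming from any machine whether such a tag is actually published, assuming ghcr.io follows the standard OCI distribution token flow for anonymous pulls of public packages (the repository path and tag are taken from the log; everything else is illustrative, not a containerd or kubelet API):

    import json
    import urllib.error
    import urllib.request

    def ghcr_tag_exists(repository: str, tag: str) -> bool:
        """Return True if ghcr.io resolves repository:tag, False on HTTP 404."""
        # Anonymous pull token for a public package (standard OCI/Docker token flow;
        # assumed to apply to ghcr.io as it does to other v2 registries).
        token_url = ("https://ghcr.io/token?service=ghcr.io"
                     f"&scope=repository:{repository}:pull")
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]

        # Manifest lookup: 200 means the tag exists, 404 matches the errors above.
        manifest_url = f"https://ghcr.io/v2/{repository}/manifests/{tag}"
        req = urllib.request.Request(
            manifest_url,
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.oci.image.index.v1+json, "
                          "application/vnd.docker.distribution.manifest.list.v2+json",
            },
            method="HEAD",
        )
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.status == 200
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    print(ghcr_tag_exists("ghcr.io/flatcar/calico/apiserver".removeprefix("ghcr.io/"), "v3.30.4"))

A False result here corresponds to what containerd reports above: the fix is publishing the tag (or pointing the pods at one that exists), not a node-side change.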
Jan 13 23:48:18.972851 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:48:18.972905 kernel: audit: type=1130 audit(1768348098.967:836): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.28.147:22-68.220.241.50:51796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:18.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.28.147:22-68.220.241.50:51796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:19.436000 audit[5701]: USER_ACCT pid=5701 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:19.439723 sshd[5701]: Accepted publickey for core from 68.220.241.50 port 51796 ssh2: RSA SHA256:vOY8WGypEHDe1ucQj1E1thVL6OXIE3/83o6052QrcUg Jan 13 23:48:19.446545 kernel: audit: type=1101 audit(1768348099.436:837): pid=5701 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:19.452693 kernel: audit: type=1103 audit(1768348099.445:838): pid=5701 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:19.445000 audit[5701]: CRED_ACQ pid=5701 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:19.448310 sshd-session[5701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:48:19.459316 kernel: audit: type=1006 audit(1768348099.445:839): pid=5701 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 13 23:48:19.445000 audit[5701]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee3fda30 a2=3 a3=0 items=0 ppid=1 pid=5701 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:19.466992 kernel: audit: type=1300 audit(1768348099.445:839): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee3fda30 a2=3 a3=0 items=0 ppid=1 pid=5701 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:19.445000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:48:19.469951 kernel: audit: type=1327 audit(1768348099.445:839): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:48:19.474706 systemd-logind[1945]: New session 17 of user core. Jan 13 23:48:19.482813 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 13 23:48:19.488000 audit[5701]: USER_START pid=5701 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:19.499555 kernel: audit: type=1105 audit(1768348099.488:840): pid=5701 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:19.498000 audit[5705]: CRED_ACQ pid=5705 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:19.507547 kernel: audit: type=1103 audit(1768348099.498:841): pid=5705 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:19.816637 sshd[5705]: Connection closed by 68.220.241.50 port 51796 Jan 13 23:48:19.815640 sshd-session[5701]: pam_unix(sshd:session): session closed for user core Jan 13 23:48:19.817000 audit[5701]: USER_END pid=5701 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:19.818000 audit[5701]: CRED_DISP pid=5701 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:19.835197 kernel: audit: type=1106 audit(1768348099.817:842): pid=5701 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:19.835303 kernel: audit: type=1104 audit(1768348099.818:843): pid=5701 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:19.836136 systemd[1]: sshd@15-172.31.28.147:22-68.220.241.50:51796.service: Deactivated successfully. Jan 13 23:48:19.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.28.147:22-68.220.241.50:51796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:19.839893 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 23:48:19.842416 systemd-logind[1945]: Session 17 logged out. Waiting for processes to exit. Jan 13 23:48:19.847292 systemd-logind[1945]: Removed session 17. 
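Each PAM/audit event in this section appears twice: once as the named audit record (USER_ACCT, CRED_ACQ, USER_START, ...) and once echoed by kauditd into the kernel ring buffer as "audit: type=NNNN". The numeric types printed in those kernel copies are the standard Linux audit record type numbers; the small lookup below covers only the types that occur in this section and makes the kernel echoes easier to read:

    # Standard Linux audit record types, limited to those seen in this journal excerpt.
    AUDIT_TYPES = {
        1006: "LOGIN",          # auid/ses assignment for the new session
        1101: "USER_ACCT",      # PAM account check (pam_access, pam_unix, ...)
        1103: "CRED_ACQ",       # credentials acquired (pam_env, pam_faillock)
        1104: "CRED_DISP",      # credentials disposed at logout
        1105: "USER_START",     # PAM session opened
        1106: "USER_END",       # PAM session closed
        1130: "SERVICE_START",  # systemd unit started (per-connection sshd@... unit)
        1131: "SERVICE_STOP",   # systemd unit stopped
        1300: "SYSCALL",        # syscall record attached to the same event
        1325: "NETFILTER_CFG",  # netfilter table change (iptables-restore below)
        1327: "PROCTITLE",      # hex-encoded command line of the acting process
    }

    def name_of(audit_type: int) -> str:
        return AUDIT_TYPES.get(audit_type, f"UNKNOWN({audit_type})")

    print(name_of(1130))  # SERVICE_START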
Jan 13 23:48:19.908667 systemd[1]: Started sshd@16-172.31.28.147:22-68.220.241.50:51812.service - OpenSSH per-connection server daemon (68.220.241.50:51812). Jan 13 23:48:19.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.28.147:22-68.220.241.50:51812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:20.380000 audit[5717]: USER_ACCT pid=5717 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:20.382314 sshd[5717]: Accepted publickey for core from 68.220.241.50 port 51812 ssh2: RSA SHA256:vOY8WGypEHDe1ucQj1E1thVL6OXIE3/83o6052QrcUg Jan 13 23:48:20.382000 audit[5717]: CRED_ACQ pid=5717 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:20.382000 audit[5717]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdd9460f0 a2=3 a3=0 items=0 ppid=1 pid=5717 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:20.382000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:48:20.385880 sshd-session[5717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:48:20.394923 systemd-logind[1945]: New session 18 of user core. Jan 13 23:48:20.402815 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 23:48:20.408000 audit[5717]: USER_START pid=5717 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:20.411000 audit[5721]: CRED_ACQ pid=5721 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:21.887393 sshd[5721]: Connection closed by 68.220.241.50 port 51812 Jan 13 23:48:21.888366 sshd-session[5717]: pam_unix(sshd:session): session closed for user core Jan 13 23:48:21.892000 audit[5717]: USER_END pid=5717 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:21.892000 audit[5717]: CRED_DISP pid=5717 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:21.901652 systemd-logind[1945]: Session 18 logged out. Waiting for processes to exit. 
Jan 13 23:48:21.901781 systemd[1]: sshd@16-172.31.28.147:22-68.220.241.50:51812.service: Deactivated successfully. Jan 13 23:48:21.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.28.147:22-68.220.241.50:51812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:21.909187 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 23:48:21.913653 systemd-logind[1945]: Removed session 18. Jan 13 23:48:21.980475 systemd[1]: Started sshd@17-172.31.28.147:22-68.220.241.50:51820.service - OpenSSH per-connection server daemon (68.220.241.50:51820). Jan 13 23:48:21.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.28.147:22-68.220.241.50:51820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:22.447000 audit[5731]: USER_ACCT pid=5731 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:22.449720 sshd[5731]: Accepted publickey for core from 68.220.241.50 port 51820 ssh2: RSA SHA256:vOY8WGypEHDe1ucQj1E1thVL6OXIE3/83o6052QrcUg Jan 13 23:48:22.450000 audit[5731]: CRED_ACQ pid=5731 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:22.450000 audit[5731]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd1744e40 a2=3 a3=0 items=0 ppid=1 pid=5731 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.450000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:48:22.454051 sshd-session[5731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:48:22.463066 systemd-logind[1945]: New session 19 of user core. Jan 13 23:48:22.478833 systemd[1]: Started session-19.scope - Session 19 of User core. 
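Each incoming connection gets its own transient unit, named sshd@<counter>-<local addr>:<port>-<peer addr>:<port>.service, which is why the SERVICE_START/SERVICE_STOP pairs above track individual SSH sessions from 68.220.241.50. A small parser for that naming scheme, written against the unit names in this log rather than any official systemd contract:

    import re

    # Matches e.g. "sshd@16-172.31.28.147:22-68.220.241.50:51812.service"
    UNIT_RE = re.compile(
        r"sshd@(?P<seq>\d+)-"
        r"(?P<local_addr>[0-9.]+):(?P<local_port>\d+)-"
        r"(?P<peer_addr>[0-9.]+):(?P<peer_port>\d+)\.service"
    )

    def parse_sshd_unit(unit: str) -> dict:
        m = UNIT_RE.fullmatch(unit)
        if m is None:
            raise ValueError(f"not a per-connection sshd unit: {unit!r}")
        return m.groupdict()

    print(parse_sshd_unit("sshd@16-172.31.28.147:22-68.220.241.50:51812.service"))
    # {'seq': '16', 'local_addr': '172.31.28.147', 'local_port': '22',
    #  'peer_addr': '68.220.241.50', 'peer_port': '51812'}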
Jan 13 23:48:22.484000 audit[5731]: USER_START pid=5731 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:22.487000 audit[5735]: CRED_ACQ pid=5735 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:23.758000 audit[5750]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5750 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:23.758000 audit[5750]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffef12b8a0 a2=0 a3=1 items=0 ppid=3635 pid=5750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:23.758000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:23.770000 audit[5750]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5750 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:23.770000 audit[5750]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffef12b8a0 a2=0 a3=1 items=0 ppid=3635 pid=5750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:23.770000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:23.800000 audit[5752]: NETFILTER_CFG table=filter:146 family=2 entries=38 op=nft_register_rule pid=5752 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:23.800000 audit[5752]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffd66da3c0 a2=0 a3=1 items=0 ppid=3635 pid=5752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:23.800000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:23.804029 sshd[5735]: Connection closed by 68.220.241.50 port 51820 Jan 13 23:48:23.804452 sshd-session[5731]: pam_unix(sshd:session): session closed for user core Jan 13 23:48:23.807000 audit[5731]: USER_END pid=5731 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:23.807000 audit[5731]: CRED_DISP pid=5731 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:23.809000 
audit[5752]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5752 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:23.809000 audit[5752]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd66da3c0 a2=0 a3=1 items=0 ppid=3635 pid=5752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:23.809000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:23.815365 systemd[1]: sshd@17-172.31.28.147:22-68.220.241.50:51820.service: Deactivated successfully. Jan 13 23:48:23.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.28.147:22-68.220.241.50:51820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:23.823207 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 23:48:23.827180 systemd-logind[1945]: Session 19 logged out. Waiting for processes to exit. Jan 13 23:48:23.829831 systemd-logind[1945]: Removed session 19. Jan 13 23:48:23.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.28.147:22-68.220.241.50:42350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:23.900604 systemd[1]: Started sshd@18-172.31.28.147:22-68.220.241.50:42350.service - OpenSSH per-connection server daemon (68.220.241.50:42350). Jan 13 23:48:23.915632 kubelet[3522]: E0113 23:48:23.914928 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t67lh" podUID="62188975-46ce-424e-8434-9b05ea3b2915" Jan 13 23:48:24.382000 audit[5757]: USER_ACCT pid=5757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:24.385287 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 13 23:48:24.385384 kernel: audit: type=1101 audit(1768348104.382:868): pid=5757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:24.388450 sshd[5757]: Accepted publickey for core from 68.220.241.50 port 42350 ssh2: RSA SHA256:vOY8WGypEHDe1ucQj1E1thVL6OXIE3/83o6052QrcUg 
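The PROCTITLE records above carry the acting process's command line hex-encoded because the argv separators are NUL bytes. Decoding the two values that recur in this section shows the privileged sshd session helper and an iptables-restore run (given the KUBE-style rule refresh pattern, most likely kube-proxy, though the parent pid is not identified in this excerpt):

    def decode_proctitle(hex_value: str) -> str:
        """Decode an audit PROCTITLE field: hex bytes with NUL-separated argv."""
        return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode()

    print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
    # sshd-session: core [priv]

    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    ))
    # iptables-restore -w 5 -W 100000 --noflush --counters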
Jan 13 23:48:24.390000 audit[5757]: CRED_ACQ pid=5757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:24.394358 sshd-session[5757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:48:24.401845 kernel: audit: type=1103 audit(1768348104.390:869): pid=5757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:24.401995 kernel: audit: type=1006 audit(1768348104.391:870): pid=5757 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 13 23:48:24.402050 kernel: audit: type=1300 audit(1768348104.391:870): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffff79bb40 a2=3 a3=0 items=0 ppid=1 pid=5757 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:24.391000 audit[5757]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffff79bb40 a2=3 a3=0 items=0 ppid=1 pid=5757 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:24.391000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:48:24.412796 kernel: audit: type=1327 audit(1768348104.391:870): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:48:24.414612 systemd-logind[1945]: New session 20 of user core. Jan 13 23:48:24.421825 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 13 23:48:24.427000 audit[5757]: USER_START pid=5757 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:24.436595 kernel: audit: type=1105 audit(1768348104.427:871): pid=5757 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:24.435000 audit[5761]: CRED_ACQ pid=5761 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:24.443552 kernel: audit: type=1103 audit(1768348104.435:872): pid=5761 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:24.901022 kubelet[3522]: E0113 23:48:24.897840 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d495df4b7-2bldm" podUID="db4e6234-4e3a-4385-ad17-4564cf1a27b8" Jan 13 23:48:24.903100 kubelet[3522]: E0113 23:48:24.902897 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74c798dd46-h9tcx" podUID="d4afc04a-ba35-4c2d-9726-5e2240fe2e11" Jan 13 23:48:25.096965 sshd[5761]: Connection closed by 68.220.241.50 port 42350 Jan 13 23:48:25.096863 sshd-session[5757]: pam_unix(sshd:session): session closed for user core Jan 13 23:48:25.098000 audit[5757]: USER_END pid=5757 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:25.107261 systemd[1]: sshd@18-172.31.28.147:22-68.220.241.50:42350.service: Deactivated 
successfully. Jan 13 23:48:25.098000 audit[5757]: CRED_DISP pid=5757 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:25.115617 kernel: audit: type=1106 audit(1768348105.098:873): pid=5757 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:25.115726 kernel: audit: type=1104 audit(1768348105.098:874): pid=5757 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:25.112854 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 23:48:25.115807 systemd-logind[1945]: Session 20 logged out. Waiting for processes to exit. Jan 13 23:48:25.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.28.147:22-68.220.241.50:42350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:25.122404 kernel: audit: type=1131 audit(1768348105.106:875): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.28.147:22-68.220.241.50:42350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:25.120234 systemd-logind[1945]: Removed session 20. Jan 13 23:48:25.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.28.147:22-68.220.241.50:42358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:25.189568 systemd[1]: Started sshd@19-172.31.28.147:22-68.220.241.50:42358.service - OpenSSH per-connection server daemon (68.220.241.50:42358). 
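After the initial ErrImagePull at 23:48:18, kubelet does not retry immediately: the "Back-off pulling image" messages above come from the pod sync loop reporting the back-off state, while actual re-pull attempts are spaced out by kubelet's image pull back-off, which by default starts at roughly 10s and doubles up to a 5-minute cap (these are the commonly documented defaults, not values read from this node's configuration). A sketch of that schedule:

    def backoff_schedule(initial: float = 10.0, cap: float = 300.0, attempts: int = 8):
        """Yield successive back-off delays, doubling until the cap is reached."""
        delay = initial
        for _ in range(attempts):
            yield delay
            delay = min(delay * 2, cap)

    print(list(backoff_schedule()))
    # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]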
Jan 13 23:48:25.654000 audit[5771]: USER_ACCT pid=5771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:25.655960 sshd[5771]: Accepted publickey for core from 68.220.241.50 port 42358 ssh2: RSA SHA256:vOY8WGypEHDe1ucQj1E1thVL6OXIE3/83o6052QrcUg Jan 13 23:48:25.656000 audit[5771]: CRED_ACQ pid=5771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:25.657000 audit[5771]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcdd38d60 a2=3 a3=0 items=0 ppid=1 pid=5771 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:25.657000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:48:25.660138 sshd-session[5771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:48:25.669634 systemd-logind[1945]: New session 21 of user core. Jan 13 23:48:25.674886 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 23:48:25.682000 audit[5771]: USER_START pid=5771 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:25.687000 audit[5775]: CRED_ACQ pid=5775 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:26.017821 sshd[5775]: Connection closed by 68.220.241.50 port 42358 Jan 13 23:48:26.019201 sshd-session[5771]: pam_unix(sshd:session): session closed for user core Jan 13 23:48:26.020000 audit[5771]: USER_END pid=5771 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:26.021000 audit[5771]: CRED_DISP pid=5771 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:26.028282 systemd[1]: sshd@19-172.31.28.147:22-68.220.241.50:42358.service: Deactivated successfully. Jan 13 23:48:26.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.28.147:22-68.220.241.50:42358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:26.033432 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 23:48:26.035245 systemd-logind[1945]: Session 21 logged out. Waiting for processes to exit. 
Jan 13 23:48:26.039045 systemd-logind[1945]: Removed session 21. Jan 13 23:48:27.898950 kubelet[3522]: E0113 23:48:27.898856 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6fffm" podUID="f27ddc53-e7e4-41b7-97d9-616c5339cc85" Jan 13 23:48:30.803000 audit[5814]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5814 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:30.807538 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 13 23:48:30.807644 kernel: audit: type=1325 audit(1768348110.803:885): table=filter:148 family=2 entries=26 op=nft_register_rule pid=5814 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:30.803000 audit[5814]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe142d300 a2=0 a3=1 items=0 ppid=3635 pid=5814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:30.817234 kernel: audit: type=1300 audit(1768348110.803:885): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe142d300 a2=0 a3=1 items=0 ppid=3635 pid=5814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:30.803000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:30.821115 kernel: audit: type=1327 audit(1768348110.803:885): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:30.821227 kernel: audit: type=1325 audit(1768348110.809:886): table=nat:149 family=2 entries=104 op=nft_register_chain pid=5814 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:30.809000 audit[5814]: NETFILTER_CFG table=nat:149 family=2 entries=104 op=nft_register_chain pid=5814 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:30.809000 audit[5814]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffe142d300 a2=0 a3=1 items=0 ppid=3635 pid=5814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:30.832299 kernel: audit: type=1300 audit(1768348110.809:886): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffe142d300 a2=0 a3=1 items=0 ppid=3635 pid=5814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:30.832425 kernel: audit: type=1327 audit(1768348110.809:886): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:30.809000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:30.898306 kubelet[3522]: E0113 23:48:30.898216 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d495df4b7-p456r" podUID="38c0b046-9ddd-4e0f-99d2-de8c0748710c" Jan 13 23:48:31.111068 systemd[1]: Started sshd@20-172.31.28.147:22-68.220.241.50:42374.service - OpenSSH per-connection server daemon (68.220.241.50:42374). Jan 13 23:48:31.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.28.147:22-68.220.241.50:42374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:31.119597 kernel: audit: type=1130 audit(1768348111.110:887): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.28.147:22-68.220.241.50:42374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:31.577000 audit[5816]: USER_ACCT pid=5816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:31.585628 sshd[5816]: Accepted publickey for core from 68.220.241.50 port 42374 ssh2: RSA SHA256:vOY8WGypEHDe1ucQj1E1thVL6OXIE3/83o6052QrcUg Jan 13 23:48:31.583000 audit[5816]: CRED_ACQ pid=5816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:31.586954 sshd-session[5816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:48:31.592036 kernel: audit: type=1101 audit(1768348111.577:888): pid=5816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:31.592107 kernel: audit: type=1103 audit(1768348111.583:889): pid=5816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:31.596775 kernel: audit: type=1006 audit(1768348111.583:890): pid=5816 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 13 23:48:31.583000 audit[5816]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffd6a09d0 a2=3 a3=0 items=0 ppid=1 pid=5816 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:31.583000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:48:31.604495 systemd-logind[1945]: New session 22 of user core. Jan 13 23:48:31.610859 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 23:48:31.616000 audit[5816]: USER_START pid=5816 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:31.619000 audit[5820]: CRED_ACQ pid=5820 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:31.903930 kubelet[3522]: E0113 23:48:31.902773 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d44f7875-phhfv" podUID="12f7eaa9-9d20-4e8f-9f20-d2118b28d17a" Jan 13 23:48:31.941630 sshd[5820]: Connection closed by 68.220.241.50 port 42374 Jan 13 23:48:31.942433 sshd-session[5816]: pam_unix(sshd:session): session closed for user core Jan 13 23:48:31.943000 audit[5816]: USER_END pid=5816 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:31.943000 audit[5816]: CRED_DISP pid=5816 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:31.950881 systemd[1]: sshd@20-172.31.28.147:22-68.220.241.50:42374.service: Deactivated successfully. Jan 13 23:48:31.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.28.147:22-68.220.241.50:42374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:31.955472 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 23:48:31.959228 systemd-logind[1945]: Session 22 logged out. Waiting for processes to exit. Jan 13 23:48:31.962946 systemd-logind[1945]: Removed session 22. 
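With several Calico images failing the same way, a tally is easier to read than the raw stream. The sketch below only matches the literal markers and image references these kubelet/containerd lines use, reading a saved journal excerpt from stdin; it is an illustration, not part of any kubelet tooling:

    import re
    import sys
    from collections import Counter

    # Image references as they appear in the errors above,
    # e.g. ghcr.io/flatcar/calico/apiserver:v3.30.4
    IMAGE_RE = re.compile(r'(ghcr\.io/[\w./-]+:[\w.-]+)')
    MARKERS = ("ErrImagePull", "ImagePullBackOff", "failed to resolve image")

    def tally(lines):
        counts = Counter()
        for line in lines:
            if any(marker in line for marker in MARKERS):
                counts.update(IMAGE_RE.findall(line))
        return counts

    if __name__ == "__main__":
        for image, n in tally(sys.stdin).most_common():
            print(f"{n:4d}  {image}")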
Jan 13 23:48:35.897591 kubelet[3522]: E0113 23:48:35.897446 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d495df4b7-2bldm" podUID="db4e6234-4e3a-4385-ad17-4564cf1a27b8" Jan 13 23:48:37.042040 systemd[1]: Started sshd@21-172.31.28.147:22-68.220.241.50:57162.service - OpenSSH per-connection server daemon (68.220.241.50:57162). Jan 13 23:48:37.049544 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 13 23:48:37.049679 kernel: audit: type=1130 audit(1768348117.040:896): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.28.147:22-68.220.241.50:57162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:37.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.28.147:22-68.220.241.50:57162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:37.532000 audit[5836]: USER_ACCT pid=5836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:37.540737 sshd[5836]: Accepted publickey for core from 68.220.241.50 port 57162 ssh2: RSA SHA256:vOY8WGypEHDe1ucQj1E1thVL6OXIE3/83o6052QrcUg Jan 13 23:48:37.542543 kernel: audit: type=1101 audit(1768348117.532:897): pid=5836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:37.539000 audit[5836]: CRED_ACQ pid=5836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:37.544601 sshd-session[5836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:48:37.557402 kernel: audit: type=1103 audit(1768348117.539:898): pid=5836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:37.557571 kernel: audit: type=1006 audit(1768348117.539:899): pid=5836 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 13 23:48:37.539000 audit[5836]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe10b8660 a2=3 a3=0 items=0 ppid=1 pid=5836 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:37.567575 kernel: audit: type=1300 audit(1768348117.539:899): arch=c00000b7 syscall=64 
success=yes exit=3 a0=8 a1=ffffe10b8660 a2=3 a3=0 items=0 ppid=1 pid=5836 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:37.567795 systemd-logind[1945]: New session 23 of user core. Jan 13 23:48:37.539000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:48:37.573564 kernel: audit: type=1327 audit(1768348117.539:899): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:48:37.572824 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 23:48:37.581000 audit[5836]: USER_START pid=5836 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:37.591000 audit[5840]: CRED_ACQ pid=5840 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:37.602912 kernel: audit: type=1105 audit(1768348117.581:900): pid=5836 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:37.603034 kernel: audit: type=1103 audit(1768348117.591:901): pid=5840 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:37.904390 kubelet[3522]: E0113 23:48:37.904063 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t67lh" podUID="62188975-46ce-424e-8434-9b05ea3b2915" Jan 13 23:48:38.014179 sshd[5840]: Connection closed by 68.220.241.50 port 57162 Jan 13 23:48:38.015422 sshd-session[5836]: pam_unix(sshd:session): session closed for user core Jan 13 23:48:38.017000 audit[5836]: USER_END pid=5836 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 
23:48:38.017000 audit[5836]: CRED_DISP pid=5836 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:38.032950 kernel: audit: type=1106 audit(1768348118.017:902): pid=5836 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:38.033078 kernel: audit: type=1104 audit(1768348118.017:903): pid=5836 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:38.031414 systemd[1]: sshd@21-172.31.28.147:22-68.220.241.50:57162.service: Deactivated successfully. Jan 13 23:48:38.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.28.147:22-68.220.241.50:57162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:38.039477 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 23:48:38.047092 systemd-logind[1945]: Session 23 logged out. Waiting for processes to exit. Jan 13 23:48:38.050655 systemd-logind[1945]: Removed session 23. Jan 13 23:48:38.898310 kubelet[3522]: E0113 23:48:38.898221 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6fffm" podUID="f27ddc53-e7e4-41b7-97d9-616c5339cc85" Jan 13 23:48:39.898910 kubelet[3522]: E0113 23:48:39.898787 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74c798dd46-h9tcx" podUID="d4afc04a-ba35-4c2d-9726-5e2240fe2e11" Jan 13 23:48:42.899376 kubelet[3522]: E0113 23:48:42.899295 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d44f7875-phhfv" podUID="12f7eaa9-9d20-4e8f-9f20-d2118b28d17a" Jan 13 23:48:43.115549 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:48:43.115712 kernel: audit: type=1130 audit(1768348123.104:905): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.28.147:22-68.220.241.50:50960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:43.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.28.147:22-68.220.241.50:50960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:43.106026 systemd[1]: Started sshd@22-172.31.28.147:22-68.220.241.50:50960.service - OpenSSH per-connection server daemon (68.220.241.50:50960). Jan 13 23:48:43.586015 sshd[5855]: Accepted publickey for core from 68.220.241.50 port 50960 ssh2: RSA SHA256:vOY8WGypEHDe1ucQj1E1thVL6OXIE3/83o6052QrcUg Jan 13 23:48:43.585000 audit[5855]: USER_ACCT pid=5855 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:43.594897 sshd-session[5855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:48:43.591000 audit[5855]: CRED_ACQ pid=5855 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:43.601284 kernel: audit: type=1101 audit(1768348123.585:906): pid=5855 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:43.601417 kernel: audit: type=1103 audit(1768348123.591:907): pid=5855 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:43.606162 kernel: audit: type=1006 audit(1768348123.591:908): pid=5855 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 13 23:48:43.591000 audit[5855]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdf5935e0 a2=3 a3=0 items=0 ppid=1 pid=5855 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:43.613819 kernel: audit: type=1300 audit(1768348123.591:908): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdf5935e0 a2=3 a3=0 items=0 ppid=1 pid=5855 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:43.591000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:48:43.617616 kernel: audit: 
type=1327 audit(1768348123.591:908): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:48:43.624566 systemd-logind[1945]: New session 24 of user core. Jan 13 23:48:43.633900 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 23:48:43.643000 audit[5855]: USER_START pid=5855 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:43.652000 audit[5859]: CRED_ACQ pid=5859 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:43.661824 kernel: audit: type=1105 audit(1768348123.643:909): pid=5855 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:43.661949 kernel: audit: type=1103 audit(1768348123.652:910): pid=5859 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:43.984686 sshd[5859]: Connection closed by 68.220.241.50 port 50960 Jan 13 23:48:43.985579 sshd-session[5855]: pam_unix(sshd:session): session closed for user core Jan 13 23:48:43.988000 audit[5855]: USER_END pid=5855 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:43.989000 audit[5855]: CRED_DISP pid=5855 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:44.006211 kernel: audit: type=1106 audit(1768348123.988:911): pid=5855 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:44.006451 kernel: audit: type=1104 audit(1768348123.989:912): pid=5855 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:44.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.28.147:22-68.220.241.50:50960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:48:44.011974 systemd[1]: sshd@22-172.31.28.147:22-68.220.241.50:50960.service: Deactivated successfully. Jan 13 23:48:44.018264 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 23:48:44.024627 systemd-logind[1945]: Session 24 logged out. Waiting for processes to exit. Jan 13 23:48:44.032712 systemd-logind[1945]: Removed session 24. Jan 13 23:48:45.899570 kubelet[3522]: E0113 23:48:45.898180 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d495df4b7-p456r" podUID="38c0b046-9ddd-4e0f-99d2-de8c0748710c" Jan 13 23:48:47.898759 kubelet[3522]: E0113 23:48:47.898673 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d495df4b7-2bldm" podUID="db4e6234-4e3a-4385-ad17-4564cf1a27b8" Jan 13 23:48:49.077289 systemd[1]: Started sshd@23-172.31.28.147:22-68.220.241.50:50970.service - OpenSSH per-connection server daemon (68.220.241.50:50970). Jan 13 23:48:49.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.28.147:22-68.220.241.50:50970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:49.080973 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:48:49.081304 kernel: audit: type=1130 audit(1768348129.077:914): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.28.147:22-68.220.241.50:50970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:48:49.549000 audit[5872]: USER_ACCT pid=5872 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:49.556259 sshd[5872]: Accepted publickey for core from 68.220.241.50 port 50970 ssh2: RSA SHA256:vOY8WGypEHDe1ucQj1E1thVL6OXIE3/83o6052QrcUg Jan 13 23:48:49.559675 sshd-session[5872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:48:49.557000 audit[5872]: CRED_ACQ pid=5872 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:49.566751 kernel: audit: type=1101 audit(1768348129.549:915): pid=5872 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:49.566904 kernel: audit: type=1103 audit(1768348129.557:916): pid=5872 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:49.574947 kernel: audit: type=1006 audit(1768348129.557:917): pid=5872 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 13 23:48:49.557000 audit[5872]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffde5a1930 a2=3 a3=0 items=0 ppid=1 pid=5872 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:49.581943 kernel: audit: type=1300 audit(1768348129.557:917): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffde5a1930 a2=3 a3=0 items=0 ppid=1 pid=5872 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:49.557000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:48:49.584829 kernel: audit: type=1327 audit(1768348129.557:917): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:48:49.586409 systemd-logind[1945]: New session 25 of user core. Jan 13 23:48:49.599878 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 13 23:48:49.607000 audit[5872]: USER_START pid=5872 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:49.622553 kernel: audit: type=1105 audit(1768348129.607:918): pid=5872 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:49.622707 kernel: audit: type=1103 audit(1768348129.616:919): pid=5876 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:49.616000 audit[5876]: CRED_ACQ pid=5876 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:49.957453 sshd[5876]: Connection closed by 68.220.241.50 port 50970 Jan 13 23:48:49.958870 sshd-session[5872]: pam_unix(sshd:session): session closed for user core Jan 13 23:48:49.963000 audit[5872]: USER_END pid=5872 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:49.972388 systemd[1]: sshd@23-172.31.28.147:22-68.220.241.50:50970.service: Deactivated successfully. Jan 13 23:48:49.963000 audit[5872]: CRED_DISP pid=5872 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:49.979823 kernel: audit: type=1106 audit(1768348129.963:920): pid=5872 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:49.979949 kernel: audit: type=1104 audit(1768348129.963:921): pid=5872 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:49.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.28.147:22-68.220.241.50:50970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:49.984471 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 23:48:49.993603 systemd-logind[1945]: Session 25 logged out. Waiting for processes to exit. Jan 13 23:48:49.997769 systemd-logind[1945]: Removed session 25. 
Jan 13 23:48:51.900560 containerd[1984]: time="2026-01-13T23:48:51.898829926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 13 23:48:52.171423 containerd[1984]: time="2026-01-13T23:48:52.171121579Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:48:52.173686 containerd[1984]: time="2026-01-13T23:48:52.173456287Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 13 23:48:52.173686 containerd[1984]: time="2026-01-13T23:48:52.173613883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 13 23:48:52.174361 kubelet[3522]: E0113 23:48:52.174188 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:48:52.174361 kubelet[3522]: E0113 23:48:52.174260 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:48:52.174992 kubelet[3522]: E0113 23:48:52.174740 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d29b182626494232aa116abb84d9adc5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-62ft2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74c798dd46-h9tcx_calico-system(d4afc04a-ba35-4c2d-9726-5e2240fe2e11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 13 23:48:52.176221 containerd[1984]: time="2026-01-13T23:48:52.176149375Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 13 23:48:52.481298 containerd[1984]: time="2026-01-13T23:48:52.481145781Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:48:52.483434 containerd[1984]: time="2026-01-13T23:48:52.483274269Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 13 23:48:52.483434 containerd[1984]: time="2026-01-13T23:48:52.483343725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 13 23:48:52.484428 kubelet[3522]: E0113 23:48:52.483767 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:48:52.484428 kubelet[3522]: E0113 23:48:52.483826 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:48:52.484428 kubelet[3522]: E0113 23:48:52.484113 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbgsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t67lh_calico-system(62188975-46ce-424e-8434-9b05ea3b2915): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 13 23:48:52.485101 containerd[1984]: time="2026-01-13T23:48:52.484748493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 13 23:48:52.801367 containerd[1984]: time="2026-01-13T23:48:52.801206950Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:48:52.803994 containerd[1984]: time="2026-01-13T23:48:52.803692174Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 13 23:48:52.804620 containerd[1984]: time="2026-01-13T23:48:52.803750974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 13 23:48:52.804940 kubelet[3522]: E0113 23:48:52.804845 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:48:52.805044 kubelet[3522]: E0113 23:48:52.804951 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:48:52.806431 kubelet[3522]: E0113 23:48:52.805488 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62ft2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74c798dd46-h9tcx_calico-system(d4afc04a-ba35-4c2d-9726-5e2240fe2e11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 13 23:48:52.808559 kubelet[3522]: E0113 23:48:52.808423 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74c798dd46-h9tcx" podUID="d4afc04a-ba35-4c2d-9726-5e2240fe2e11" Jan 13 23:48:52.814670 containerd[1984]: time="2026-01-13T23:48:52.814590310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 13 23:48:53.056640 containerd[1984]: time="2026-01-13T23:48:53.055521715Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:48:53.059540 containerd[1984]: time="2026-01-13T23:48:53.059353232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 13 23:48:53.059540 containerd[1984]: time="2026-01-13T23:48:53.059430416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 13 23:48:53.059778 kubelet[3522]: E0113 23:48:53.059705 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:48:53.059778 kubelet[3522]: E0113 23:48:53.059766 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:48:53.060021 kubelet[3522]: E0113 23:48:53.059935 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbgsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t67lh_calico-system(62188975-46ce-424e-8434-9b05ea3b2915): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 13 23:48:53.061627 kubelet[3522]: E0113 23:48:53.061536 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t67lh" podUID="62188975-46ce-424e-8434-9b05ea3b2915" Jan 13 23:48:53.907057 kubelet[3522]: E0113 23:48:53.905533 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6fffm" podUID="f27ddc53-e7e4-41b7-97d9-616c5339cc85" Jan 13 23:48:55.063566 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:48:55.063709 kernel: audit: type=1130 audit(1768348135.054:923): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.28.147:22-68.220.241.50:37080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:55.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.28.147:22-68.220.241.50:37080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:55.055198 systemd[1]: Started sshd@24-172.31.28.147:22-68.220.241.50:37080.service - OpenSSH per-connection server daemon (68.220.241.50:37080). 
Jan 13 23:48:55.538000 audit[5895]: USER_ACCT pid=5895 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:55.544697 sshd[5895]: Accepted publickey for core from 68.220.241.50 port 37080 ssh2: RSA SHA256:vOY8WGypEHDe1ucQj1E1thVL6OXIE3/83o6052QrcUg Jan 13 23:48:55.546705 kernel: audit: type=1101 audit(1768348135.538:924): pid=5895 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:55.547000 audit[5895]: CRED_ACQ pid=5895 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:55.556071 sshd-session[5895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:48:55.566565 kernel: audit: type=1103 audit(1768348135.547:925): pid=5895 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:55.566735 kernel: audit: type=1006 audit(1768348135.548:926): pid=5895 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 13 23:48:55.548000 audit[5895]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc4889af0 a2=3 a3=0 items=0 ppid=1 pid=5895 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:55.548000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:48:55.582194 kernel: audit: type=1300 audit(1768348135.548:926): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc4889af0 a2=3 a3=0 items=0 ppid=1 pid=5895 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:55.582324 kernel: audit: type=1327 audit(1768348135.548:926): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:48:55.582568 systemd-logind[1945]: New session 26 of user core. Jan 13 23:48:55.588873 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 13 23:48:55.596000 audit[5895]: USER_START pid=5895 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:55.610000 audit[5899]: CRED_ACQ pid=5899 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:55.618375 kernel: audit: type=1105 audit(1768348135.596:927): pid=5895 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:55.618537 kernel: audit: type=1103 audit(1768348135.610:928): pid=5899 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:55.905122 kubelet[3522]: E0113 23:48:55.904904 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d44f7875-phhfv" podUID="12f7eaa9-9d20-4e8f-9f20-d2118b28d17a" Jan 13 23:48:55.975555 sshd[5899]: Connection closed by 68.220.241.50 port 37080 Jan 13 23:48:55.976448 sshd-session[5895]: pam_unix(sshd:session): session closed for user core Jan 13 23:48:55.982000 audit[5895]: USER_END pid=5895 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:55.982000 audit[5895]: CRED_DISP pid=5895 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:55.998663 kernel: audit: type=1106 audit(1768348135.982:929): pid=5895 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:55.998140 systemd[1]: sshd@24-172.31.28.147:22-68.220.241.50:37080.service: Deactivated successfully. Jan 13 23:48:56.001947 systemd[1]: session-26.scope: Deactivated successfully. 
Jan 13 23:48:56.012737 kernel: audit: type=1104 audit(1768348135.982:930): pid=5895 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 13 23:48:55.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.28.147:22-68.220.241.50:37080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:56.012123 systemd-logind[1945]: Session 26 logged out. Waiting for processes to exit. Jan 13 23:48:56.017076 systemd-logind[1945]: Removed session 26. Jan 13 23:49:00.897672 containerd[1984]: time="2026-01-13T23:49:00.897540246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:49:01.310307 containerd[1984]: time="2026-01-13T23:49:01.309754576Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:49:01.312096 containerd[1984]: time="2026-01-13T23:49:01.311981764Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:49:01.312267 containerd[1984]: time="2026-01-13T23:49:01.312001192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:49:01.312453 kubelet[3522]: E0113 23:49:01.312373 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:49:01.313318 kubelet[3522]: E0113 23:49:01.312465 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:49:01.313393 containerd[1984]: time="2026-01-13T23:49:01.312980849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:49:01.313833 kubelet[3522]: E0113 23:49:01.313532 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2ptt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d495df4b7-2bldm_calico-apiserver(db4e6234-4e3a-4385-ad17-4564cf1a27b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:49:01.315040 kubelet[3522]: E0113 23:49:01.314975 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d495df4b7-2bldm" podUID="db4e6234-4e3a-4385-ad17-4564cf1a27b8" Jan 13 23:49:01.575999 containerd[1984]: time="2026-01-13T23:49:01.575689110Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:49:01.578209 containerd[1984]: time="2026-01-13T23:49:01.578078010Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:49:01.578209 containerd[1984]: time="2026-01-13T23:49:01.578140902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:49:01.578692 kubelet[3522]: E0113 23:49:01.578616 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:49:01.578890 kubelet[3522]: E0113 23:49:01.578856 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:49:01.579242 kubelet[3522]: E0113 23:49:01.579170 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9hmj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d495df4b7-p456r_calico-apiserver(38c0b046-9ddd-4e0f-99d2-de8c0748710c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:49:01.580654 kubelet[3522]: E0113 23:49:01.580617 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d495df4b7-p456r" podUID="38c0b046-9ddd-4e0f-99d2-de8c0748710c" Jan 13 23:49:03.902266 kubelet[3522]: E0113 
23:49:03.902191 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t67lh" podUID="62188975-46ce-424e-8434-9b05ea3b2915" Jan 13 23:49:04.898738 kubelet[3522]: E0113 23:49:04.898648 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74c798dd46-h9tcx" podUID="d4afc04a-ba35-4c2d-9726-5e2240fe2e11" Jan 13 23:49:07.901083 containerd[1984]: time="2026-01-13T23:49:07.901016269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 13 23:49:08.296535 containerd[1984]: time="2026-01-13T23:49:08.296150699Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:49:08.298872 containerd[1984]: time="2026-01-13T23:49:08.298623899Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 13 23:49:08.298872 containerd[1984]: time="2026-01-13T23:49:08.298742063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 13 23:49:08.299084 kubelet[3522]: E0113 23:49:08.298978 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:49:08.299084 kubelet[3522]: E0113 23:49:08.299038 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:49:08.299860 kubelet[3522]: E0113 23:49:08.299227 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p5g5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-6fffm_calico-system(f27ddc53-e7e4-41b7-97d9-616c5339cc85): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 13 23:49:08.300575 kubelet[3522]: E0113 23:49:08.300474 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6fffm" podUID="f27ddc53-e7e4-41b7-97d9-616c5339cc85" Jan 13 23:49:08.897585 containerd[1984]: time="2026-01-13T23:49:08.897238742Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 13 23:49:09.164806 containerd[1984]: time="2026-01-13T23:49:09.164629320Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:49:09.167836 containerd[1984]: time="2026-01-13T23:49:09.167727456Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 13 23:49:09.168140 containerd[1984]: time="2026-01-13T23:49:09.167797668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 13 23:49:09.168617 kubelet[3522]: E0113 23:49:09.168528 3522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:49:09.168995 kubelet[3522]: E0113 23:49:09.168751 3522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:49:09.169525 kubelet[3522]: E0113 23:49:09.169370 3522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w26fn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86d44f7875-phhfv_calico-system(12f7eaa9-9d20-4e8f-9f20-d2118b28d17a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 13 23:49:09.170749 kubelet[3522]: E0113 23:49:09.170690 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d44f7875-phhfv" podUID="12f7eaa9-9d20-4e8f-9f20-d2118b28d17a" Jan 13 23:49:09.561905 systemd[1]: cri-containerd-7a207899fefd64bb05bf7c2670674792749a30d94947b22c2f9233f4b9bee0ad.scope: Deactivated successfully. Jan 13 23:49:09.562577 systemd[1]: cri-containerd-7a207899fefd64bb05bf7c2670674792749a30d94947b22c2f9233f4b9bee0ad.scope: Consumed 26.862s CPU time, 97.3M memory peak. Jan 13 23:49:09.568719 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:49:09.568864 kernel: audit: type=1334 audit(1768348149.565:932): prog-id=153 op=UNLOAD Jan 13 23:49:09.565000 audit: BPF prog-id=153 op=UNLOAD Jan 13 23:49:09.565000 audit: BPF prog-id=157 op=UNLOAD Jan 13 23:49:09.571566 containerd[1984]: time="2026-01-13T23:49:09.571184402Z" level=info msg="received container exit event container_id:\"7a207899fefd64bb05bf7c2670674792749a30d94947b22c2f9233f4b9bee0ad\" id:\"7a207899fefd64bb05bf7c2670674792749a30d94947b22c2f9233f4b9bee0ad\" pid:3851 exit_status:1 exited_at:{seconds:1768348149 nanos:570412046}" Jan 13 23:49:09.572454 kernel: audit: type=1334 audit(1768348149.565:933): prog-id=157 op=UNLOAD Jan 13 23:49:09.616399 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a207899fefd64bb05bf7c2670674792749a30d94947b22c2f9233f4b9bee0ad-rootfs.mount: Deactivated successfully. 
Jan 13 23:49:09.702258 kubelet[3522]: I0113 23:49:09.702167 3522 scope.go:117] "RemoveContainer" containerID="7a207899fefd64bb05bf7c2670674792749a30d94947b22c2f9233f4b9bee0ad"
Jan 13 23:49:09.705833 containerd[1984]: time="2026-01-13T23:49:09.705772142Z" level=info msg="CreateContainer within sandbox \"8a7d0d002ebe6f49ed3bd9d02fc58f011a523d2ed2783d4ed0b46371bf7d10e1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 13 23:49:09.724047 containerd[1984]: time="2026-01-13T23:49:09.723471686Z" level=info msg="Container 11506b00f4a9b8e3352df57cfc546a0e6f9c03b7ea0ef1b915cb7d85783defe4: CDI devices from CRI Config.CDIDevices: []"
Jan 13 23:49:09.734864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2441945533.mount: Deactivated successfully.
Jan 13 23:49:09.744215 containerd[1984]: time="2026-01-13T23:49:09.744141566Z" level=info msg="CreateContainer within sandbox \"8a7d0d002ebe6f49ed3bd9d02fc58f011a523d2ed2783d4ed0b46371bf7d10e1\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"11506b00f4a9b8e3352df57cfc546a0e6f9c03b7ea0ef1b915cb7d85783defe4\""
Jan 13 23:49:09.745541 containerd[1984]: time="2026-01-13T23:49:09.745058570Z" level=info msg="StartContainer for \"11506b00f4a9b8e3352df57cfc546a0e6f9c03b7ea0ef1b915cb7d85783defe4\""
Jan 13 23:49:09.747974 containerd[1984]: time="2026-01-13T23:49:09.747913094Z" level=info msg="connecting to shim 11506b00f4a9b8e3352df57cfc546a0e6f9c03b7ea0ef1b915cb7d85783defe4" address="unix:///run/containerd/s/786016f39148a78cab08c019cd952fd61c360712d71e68315abc0be9c700a075" protocol=ttrpc version=3
Jan 13 23:49:09.788878 systemd[1]: Started cri-containerd-11506b00f4a9b8e3352df57cfc546a0e6f9c03b7ea0ef1b915cb7d85783defe4.scope - libcontainer container 11506b00f4a9b8e3352df57cfc546a0e6f9c03b7ea0ef1b915cb7d85783defe4.
Jan 13 23:49:09.811000 audit: BPF prog-id=263 op=LOAD
Jan 13 23:49:09.814556 kernel: audit: type=1334 audit(1768348149.811:934): prog-id=263 op=LOAD
Jan 13 23:49:09.814000 audit: BPF prog-id=264 op=LOAD
Jan 13 23:49:09.814000 audit[5972]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3650 pid=5972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:49:09.824108 kernel: audit: type=1334 audit(1768348149.814:935): prog-id=264 op=LOAD
Jan 13 23:49:09.824233 kernel: audit: type=1300 audit(1768348149.814:935): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3650 pid=5972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:49:09.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131353036623030663461396238653333353264663537636663353436
Jan 13 23:49:09.830704 kernel: audit: type=1327 audit(1768348149.814:935): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131353036623030663461396238653333353264663537636663353436
Jan 13 23:49:09.830816 kernel: audit: type=1334 audit(1768348149.816:936): prog-id=264 op=UNLOAD
Jan 13 23:49:09.816000 audit: BPF prog-id=264 op=UNLOAD
Jan 13 23:49:09.816000 audit[5972]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3650 pid=5972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:49:09.838468 kernel: audit: type=1300 audit(1768348149.816:936): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3650 pid=5972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:49:09.816000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131353036623030663461396238653333353264663537636663353436
Jan 13 23:49:09.845240 kernel: audit: type=1327 audit(1768348149.816:936): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131353036623030663461396238653333353264663537636663353436
Jan 13 23:49:09.845377 kernel: audit: type=1334 audit(1768348149.816:937): prog-id=265 op=LOAD
Jan 13 23:49:09.816000 audit: BPF prog-id=265 op=LOAD
Jan 13 23:49:09.816000 audit[5972]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3650 pid=5972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:49:09.816000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131353036623030663461396238653333353264663537636663353436
Jan 13 23:49:09.816000 audit: BPF prog-id=266 op=LOAD
Jan 13 23:49:09.816000 audit[5972]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3650 pid=5972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:49:09.816000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131353036623030663461396238653333353264663537636663353436
Jan 13 23:49:09.816000 audit: BPF prog-id=266 op=UNLOAD
Jan 13 23:49:09.816000 audit[5972]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3650 pid=5972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:49:09.816000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131353036623030663461396238653333353264663537636663353436
Jan 13 23:49:09.816000 audit: BPF prog-id=265 op=UNLOAD
Jan 13 23:49:09.816000 audit[5972]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3650 pid=5972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:49:09.816000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131353036623030663461396238653333353264663537636663353436
Jan 13 23:49:09.817000 audit: BPF prog-id=267 op=LOAD
Jan 13 23:49:09.817000 audit[5972]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3650 pid=5972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:49:09.817000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131353036623030663461396238653333353264663537636663353436
Jan 13 23:49:09.880819 containerd[1984]: time="2026-01-13T23:49:09.880670199Z" level=info msg="StartContainer for \"11506b00f4a9b8e3352df57cfc546a0e6f9c03b7ea0ef1b915cb7d85783defe4\" returns successfully"
Jan 13 23:49:10.336157 systemd[1]: cri-containerd-dad01a1114c6d840a7b4b60001f9c8597697badb3ce9eb73cd3944ea3ea1f29e.scope: Deactivated successfully.
Jan 13 23:49:10.336854 systemd[1]: cri-containerd-dad01a1114c6d840a7b4b60001f9c8597697badb3ce9eb73cd3944ea3ea1f29e.scope: Consumed 5.498s CPU time, 59.4M memory peak.
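Note (not part of the log): the PROCTITLE values in the audit records above are the invoking command line, hex-encoded with NUL bytes separating the arguments (a runc invocation here, as the comm="runc"/exe="/usr/bin/runc" fields already indicate). A small Go sketch that decodes such a value back into a readable command line:

    package main

    import (
    	"encoding/hex"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	if len(os.Args) != 2 {
    		fmt.Fprintln(os.Stderr, "usage: decode-proctitle <hex-string>")
    		os.Exit(1)
    	}
    	raw, err := hex.DecodeString(os.Args[1])
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "invalid hex:", err)
    		os.Exit(1)
    	}
    	// The audit subsystem encodes argv as NUL-separated strings;
    	// join them with spaces to recover the command line.
    	args := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
    	fmt.Println(strings.Join(args, " "))
    }

Running it on one of the proctitle strings above would print the runc command line (note the value is truncated at the audit record's length limit, so the container ID at the end is cut short).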
Jan 13 23:49:10.338000 audit: BPF prog-id=268 op=LOAD
Jan 13 23:49:10.338000 audit: BPF prog-id=95 op=UNLOAD
Jan 13 23:49:10.340000 audit: BPF prog-id=110 op=UNLOAD
Jan 13 23:49:10.340000 audit: BPF prog-id=114 op=UNLOAD
Jan 13 23:49:10.341934 containerd[1984]: time="2026-01-13T23:49:10.341819461Z" level=info msg="received container exit event container_id:\"dad01a1114c6d840a7b4b60001f9c8597697badb3ce9eb73cd3944ea3ea1f29e\" id:\"dad01a1114c6d840a7b4b60001f9c8597697badb3ce9eb73cd3944ea3ea1f29e\" pid:3213 exit_status:1 exited_at:{seconds:1768348150 nanos:340195369}"
Jan 13 23:49:10.387816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dad01a1114c6d840a7b4b60001f9c8597697badb3ce9eb73cd3944ea3ea1f29e-rootfs.mount: Deactivated successfully.
Jan 13 23:49:10.716156 kubelet[3522]: I0113 23:49:10.716041 3522 scope.go:117] "RemoveContainer" containerID="dad01a1114c6d840a7b4b60001f9c8597697badb3ce9eb73cd3944ea3ea1f29e"
Jan 13 23:49:10.723526 containerd[1984]: time="2026-01-13T23:49:10.722356899Z" level=info msg="CreateContainer within sandbox \"078426d531698a8cf4fe86458d3609a5ec721286e8ab99c1befa622d98701a17\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 13 23:49:10.745556 containerd[1984]: time="2026-01-13T23:49:10.743671755Z" level=info msg="Container c5a47f0221de9cc11c9913886886627942d38368e1dc6bd39da2d4d03ced7f23: CDI devices from CRI Config.CDIDevices: []"
Jan 13 23:49:10.750285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3924307647.mount: Deactivated successfully.
Jan 13 23:49:10.769690 containerd[1984]: time="2026-01-13T23:49:10.769640415Z" level=info msg="CreateContainer within sandbox \"078426d531698a8cf4fe86458d3609a5ec721286e8ab99c1befa622d98701a17\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c5a47f0221de9cc11c9913886886627942d38368e1dc6bd39da2d4d03ced7f23\""
Jan 13 23:49:10.770922 containerd[1984]: time="2026-01-13T23:49:10.770872107Z" level=info msg="StartContainer for \"c5a47f0221de9cc11c9913886886627942d38368e1dc6bd39da2d4d03ced7f23\""
Jan 13 23:49:10.777829 containerd[1984]: time="2026-01-13T23:49:10.777709768Z" level=info msg="connecting to shim c5a47f0221de9cc11c9913886886627942d38368e1dc6bd39da2d4d03ced7f23" address="unix:///run/containerd/s/bc59c77e77f25ce8923894cce3d08b3fae624eb4cdbff68af2eabbd16ca4b3be" protocol=ttrpc version=3
Jan 13 23:49:10.827977 systemd[1]: Started cri-containerd-c5a47f0221de9cc11c9913886886627942d38368e1dc6bd39da2d4d03ced7f23.scope - libcontainer container c5a47f0221de9cc11c9913886886627942d38368e1dc6bd39da2d4d03ced7f23.
Jan 13 23:49:10.861000 audit: BPF prog-id=269 op=LOAD
Jan 13 23:49:10.862000 audit: BPF prog-id=270 op=LOAD
Jan 13 23:49:10.862000 audit[6013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3015 pid=6013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:49:10.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335613437663032323164653963633131633939313338383638383636
Jan 13 23:49:10.862000 audit: BPF prog-id=270 op=UNLOAD
Jan 13 23:49:10.862000 audit[6013]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3015 pid=6013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:49:10.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335613437663032323164653963633131633939313338383638383636
Jan 13 23:49:10.863000 audit: BPF prog-id=271 op=LOAD
Jan 13 23:49:10.863000 audit[6013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3015 pid=6013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:49:10.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335613437663032323164653963633131633939313338383638383636
Jan 13 23:49:10.863000 audit: BPF prog-id=272 op=LOAD
Jan 13 23:49:10.863000 audit[6013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3015 pid=6013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:49:10.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335613437663032323164653963633131633939313338383638383636
Jan 13 23:49:10.864000 audit: BPF prog-id=272 op=UNLOAD
Jan 13 23:49:10.864000 audit[6013]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3015 pid=6013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:49:10.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335613437663032323164653963633131633939313338383638383636
Jan 13 23:49:10.864000 audit: BPF prog-id=271 op=UNLOAD
Jan 13 23:49:10.864000 audit[6013]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3015 pid=6013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:49:10.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335613437663032323164653963633131633939313338383638383636
Jan 13 23:49:10.864000 audit: BPF prog-id=273 op=LOAD
Jan 13 23:49:10.864000 audit[6013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3015 pid=6013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:49:10.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335613437663032323164653963633131633939313338383638383636
Jan 13 23:49:10.924279 containerd[1984]: time="2026-01-13T23:49:10.924131476Z" level=info msg="StartContainer for \"c5a47f0221de9cc11c9913886886627942d38368e1dc6bd39da2d4d03ced7f23\" returns successfully"
Jan 13 23:49:12.896932 kubelet[3522]: E0113 23:49:12.896837 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d495df4b7-2bldm" podUID="db4e6234-4e3a-4385-ad17-4564cf1a27b8"
Jan 13 23:49:15.785676 systemd[1]: cri-containerd-c83bfc99f552d075b03e32f27925d564adb343aca58adb0ab88b2819d3c75122.scope: Deactivated successfully.
Jan 13 23:49:15.787000 audit: BPF prog-id=274 op=LOAD
Jan 13 23:49:15.786887 systemd[1]: cri-containerd-c83bfc99f552d075b03e32f27925d564adb343aca58adb0ab88b2819d3c75122.scope: Consumed 5.435s CPU time, 23M memory peak.
Jan 13 23:49:15.790359 kernel: kauditd_printk_skb: 40 callbacks suppressed
Jan 13 23:49:15.791258 kernel: audit: type=1334 audit(1768348155.787:954): prog-id=274 op=LOAD
Jan 13 23:49:15.791632 kernel: audit: type=1334 audit(1768348155.787:955): prog-id=100 op=UNLOAD
Jan 13 23:49:15.787000 audit: BPF prog-id=100 op=UNLOAD
Jan 13 23:49:15.792000 audit: BPF prog-id=115 op=UNLOAD
Jan 13 23:49:15.795416 kernel: audit: type=1334 audit(1768348155.792:956): prog-id=115 op=UNLOAD
Jan 13 23:49:15.795597 kernel: audit: type=1334 audit(1768348155.792:957): prog-id=119 op=UNLOAD
Jan 13 23:49:15.792000 audit: BPF prog-id=119 op=UNLOAD
Jan 13 23:49:15.799437 containerd[1984]: time="2026-01-13T23:49:15.799360616Z" level=info msg="received container exit event container_id:\"c83bfc99f552d075b03e32f27925d564adb343aca58adb0ab88b2819d3c75122\" id:\"c83bfc99f552d075b03e32f27925d564adb343aca58adb0ab88b2819d3c75122\" pid:3242 exit_status:1 exited_at:{seconds:1768348155 nanos:798959684}"
Jan 13 23:49:15.843314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c83bfc99f552d075b03e32f27925d564adb343aca58adb0ab88b2819d3c75122-rootfs.mount: Deactivated successfully.
Jan 13 23:49:16.747636 kubelet[3522]: I0113 23:49:16.746794 3522 scope.go:117] "RemoveContainer" containerID="c83bfc99f552d075b03e32f27925d564adb343aca58adb0ab88b2819d3c75122"
Jan 13 23:49:16.753775 containerd[1984]: time="2026-01-13T23:49:16.753706185Z" level=info msg="CreateContainer within sandbox \"004b00c0f3db68828e0356e2b05fc24082e8370ab8f4389eaeaf3b88deeadd1b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 13 23:49:16.772957 containerd[1984]: time="2026-01-13T23:49:16.772891965Z" level=info msg="Container 25e742b91e20a171848a6f823e72aa178e3c32f8e4fe650f7c0e3582d4753e38: CDI devices from CRI Config.CDIDevices: []"
Jan 13 23:49:16.794327 containerd[1984]: time="2026-01-13T23:49:16.794263401Z" level=info msg="CreateContainer within sandbox \"004b00c0f3db68828e0356e2b05fc24082e8370ab8f4389eaeaf3b88deeadd1b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"25e742b91e20a171848a6f823e72aa178e3c32f8e4fe650f7c0e3582d4753e38\""
Jan 13 23:49:16.795117 containerd[1984]: time="2026-01-13T23:49:16.795071229Z" level=info msg="StartContainer for \"25e742b91e20a171848a6f823e72aa178e3c32f8e4fe650f7c0e3582d4753e38\""
Jan 13 23:49:16.797426 containerd[1984]: time="2026-01-13T23:49:16.797347977Z" level=info msg="connecting to shim 25e742b91e20a171848a6f823e72aa178e3c32f8e4fe650f7c0e3582d4753e38" address="unix:///run/containerd/s/0150ef3e24f409cb652c5b7650d7f1fd90837fd43089125ce1131c8dddf8fc37" protocol=ttrpc version=3
Jan 13 23:49:16.841888 systemd[1]: Started cri-containerd-25e742b91e20a171848a6f823e72aa178e3c32f8e4fe650f7c0e3582d4753e38.scope - libcontainer container 25e742b91e20a171848a6f823e72aa178e3c32f8e4fe650f7c0e3582d4753e38.
Jan 13 23:49:16.867000 audit: BPF prog-id=275 op=LOAD
Jan 13 23:49:16.869000 audit: BPF prog-id=276 op=LOAD
Jan 13 23:49:16.871588 kernel: audit: type=1334 audit(1768348156.867:958): prog-id=275 op=LOAD
Jan 13 23:49:16.871685 kernel: audit: type=1334 audit(1768348156.869:959): prog-id=276 op=LOAD
Jan 13 23:49:16.869000 audit[6057]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3043 pid=6057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:49:16.878168 kernel: audit: type=1300 audit(1768348156.869:959): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3043 pid=6057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:49:16.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235653734326239316532306131373138343861366638323365373261
Jan 13 23:49:16.884041 kernel: audit: type=1327 audit(1768348156.869:959): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235653734326239316532306131373138343861366638323365373261
Jan 13 23:49:16.869000 audit: BPF prog-id=276 op=UNLOAD
Jan 13 23:49:16.887138 kernel: audit: type=1334 audit(1768348156.869:960): prog-id=276 op=UNLOAD
Jan 13 23:49:16.887624 kernel: audit: type=1300 audit(1768348156.869:960): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3043 pid=6057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:49:16.869000 audit[6057]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3043 pid=6057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:49:16.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235653734326239316532306131373138343861366638323365373261
Jan 13 23:49:16.870000 audit: BPF prog-id=277 op=LOAD
Jan 13 23:49:16.870000 audit[6057]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3043 pid=6057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:49:16.870000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235653734326239316532306131373138343861366638323365373261
Jan 13 23:49:16.877000 audit: BPF prog-id=278 op=LOAD
Jan 13 23:49:16.877000 audit[6057]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3043 pid=6057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:49:16.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235653734326239316532306131373138343861366638323365373261
Jan 13 23:49:16.883000 audit: BPF prog-id=278 op=UNLOAD
Jan 13 23:49:16.883000 audit[6057]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3043 pid=6057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:49:16.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235653734326239316532306131373138343861366638323365373261
Jan 13 23:49:16.883000 audit: BPF prog-id=277 op=UNLOAD
Jan 13 23:49:16.883000 audit[6057]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3043 pid=6057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:49:16.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235653734326239316532306131373138343861366638323365373261
Jan 13 23:49:16.883000 audit: BPF prog-id=279 op=LOAD
Jan 13 23:49:16.883000 audit[6057]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3043 pid=6057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 13 23:49:16.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235653734326239316532306131373138343861366638323365373261
Jan 13 23:49:16.898454 kubelet[3522]: E0113 23:49:16.898385 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d495df4b7-p456r" podUID="38c0b046-9ddd-4e0f-99d2-de8c0748710c"
Jan 13 23:49:16.958747 containerd[1984]: time="2026-01-13T23:49:16.958666174Z" level=info msg="StartContainer for \"25e742b91e20a171848a6f823e72aa178e3c32f8e4fe650f7c0e3582d4753e38\" returns successfully"
Jan 13 23:49:17.263626 kubelet[3522]: E0113 23:49:17.263541 3522 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-147?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 13 23:49:17.903496 kubelet[3522]: E0113 23:49:17.903422 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t67lh" podUID="62188975-46ce-424e-8434-9b05ea3b2915"
Jan 13 23:49:18.899402 kubelet[3522]: E0113 23:49:18.899326 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74c798dd46-h9tcx" podUID="d4afc04a-ba35-4c2d-9726-5e2240fe2e11"
Jan 13 23:49:20.897437 kubelet[3522]: E0113 23:49:20.897274 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d44f7875-phhfv" podUID="12f7eaa9-9d20-4e8f-9f20-d2118b28d17a"
Jan 13 23:49:21.362947 systemd[1]: cri-containerd-11506b00f4a9b8e3352df57cfc546a0e6f9c03b7ea0ef1b915cb7d85783defe4.scope: Deactivated successfully.
Jan 13 23:49:21.367000 audit: BPF prog-id=263 op=UNLOAD
Jan 13 23:49:21.368976 kernel: kauditd_printk_skb: 16 callbacks suppressed
Jan 13 23:49:21.369156 kernel: audit: type=1334 audit(1768348161.367:966): prog-id=263 op=UNLOAD
Jan 13 23:49:21.367000 audit: BPF prog-id=267 op=UNLOAD
Jan 13 23:49:21.371221 containerd[1984]: time="2026-01-13T23:49:21.371065020Z" level=info msg="received container exit event container_id:\"11506b00f4a9b8e3352df57cfc546a0e6f9c03b7ea0ef1b915cb7d85783defe4\" id:\"11506b00f4a9b8e3352df57cfc546a0e6f9c03b7ea0ef1b915cb7d85783defe4\" pid:5985 exit_status:1 exited_at:{seconds:1768348161 nanos:367279860}"
Jan 13 23:49:21.373169 kernel: audit: type=1334 audit(1768348161.367:967): prog-id=267 op=UNLOAD
Jan 13 23:49:21.413872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11506b00f4a9b8e3352df57cfc546a0e6f9c03b7ea0ef1b915cb7d85783defe4-rootfs.mount: Deactivated successfully.
Jan 13 23:49:21.775003 kubelet[3522]: I0113 23:49:21.774940 3522 scope.go:117] "RemoveContainer" containerID="7a207899fefd64bb05bf7c2670674792749a30d94947b22c2f9233f4b9bee0ad"
Jan 13 23:49:21.776061 kubelet[3522]: I0113 23:49:21.775585 3522 scope.go:117] "RemoveContainer" containerID="11506b00f4a9b8e3352df57cfc546a0e6f9c03b7ea0ef1b915cb7d85783defe4"
Jan 13 23:49:21.776230 kubelet[3522]: E0113 23:49:21.776082 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-2q6ct_tigera-operator(75fc2771-b89d-4cd3-861b-e7d66b68a6b9)\"" pod="tigera-operator/tigera-operator-7dcd859c48-2q6ct" podUID="75fc2771-b89d-4cd3-861b-e7d66b68a6b9"
Jan 13 23:49:21.778831 containerd[1984]: time="2026-01-13T23:49:21.778777922Z" level=info msg="RemoveContainer for \"7a207899fefd64bb05bf7c2670674792749a30d94947b22c2f9233f4b9bee0ad\""
Jan 13 23:49:21.787627 containerd[1984]: time="2026-01-13T23:49:21.787434206Z" level=info msg="RemoveContainer for \"7a207899fefd64bb05bf7c2670674792749a30d94947b22c2f9233f4b9bee0ad\" returns successfully"
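Note (not part of the log): the final kubelet message puts tigera-operator into CrashLoopBackOff with a 10s back-off, matching the "back-off 10s" shown above. For context only, the kubelet doubles that delay on each further failed restart up to a 5-minute cap (the doubling and the cap are standard default behaviour, not values taken from this log); a tiny illustrative sketch of the resulting schedule:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// CrashLoopBackOff schedule: start at 10s, double per failed
    	// restart, and never exceed the 5-minute cap.
    	backoff := 10 * time.Second
    	const maxBackoff = 5 * time.Minute
    	for attempt := 1; attempt <= 8; attempt++ {
    		fmt.Printf("restart attempt %d: wait %s\n", attempt, backoff)
    		backoff *= 2
    		if backoff > maxBackoff {
    			backoff = maxBackoff
    		}
    	}
    }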